Muscles in Action
- URL: http://arxiv.org/abs/2212.02978v3
- Date: Mon, 20 Mar 2023 19:10:22 GMT
- Title: Muscles in Action
- Authors: Mia Chiquier, Carl Vondrick
- Abstract summary: We present a new dataset, Muscles in Action (MIA), to learn to incorporate muscle activity into human motion representations.
We learn a bidirectional representation that predicts muscle activation from video, and conversely, reconstructs motion from muscle activation.
Putting muscles into computer vision systems will enable richer models of virtual humans, with applications in sports, fitness, and AR/VR.
- Score: 22.482090207522358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion is created by, and constrained by, our muscles. We take a first
step toward building computer vision methods that represent the internal muscle
activity that causes motion. We present a new dataset, Muscles in Action (MIA),
to learn to incorporate muscle activity into human motion representations. The
dataset consists of 12.5 hours of synchronized video and surface
electromyography (sEMG) data of 10 subjects performing various exercises. Using
this dataset, we learn a bidirectional representation that predicts muscle
activation from video, and conversely, reconstructs motion from muscle
activation. We evaluate our model on in-distribution subjects and exercises, as
well as on out-of-distribution subjects and exercises. We demonstrate how
advances in modeling both modalities jointly can serve as conditioning for
muscularly consistent motion generation. Putting muscles into computer vision
systems will enable richer models of virtual humans, with applications in
sports, fitness, and AR/VR.
Related papers
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising diffusion-based model that synthesizes full-body reactive motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair-dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile [2.2008680042670123]
We present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves for human pose estimation.
Our system utilizes synchronized datasets that comprise time-series data from the Knee Sleeves and the corresponding ground truth labels from the visualized motion capture camera system.
We employ these to generate 3D human models solely based on the wearable data of individuals performing different activities.
arXiv Detail & Related papers (2023-10-02T00:34:21Z)
- GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency [57.9920824261925]
Hands are dexterous and highly versatile manipulators that are central to how humans interact with objects and their environment.
Modeling realistic hand-object interactions is critical for applications in computer graphics, computer vision, and mixed reality.
GRIP is a learning-based method that takes as input the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.
arXiv Detail & Related papers (2023-08-22T17:59:51Z)
- Modelling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network [1.9458156037869137]
We propose an image-computable model of human motion perception by bridging the gap between biological and computer vision models.
This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system.
In silico neurophysiology reveals that our model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning.
arXiv Detail & Related papers (2023-05-16T04:16:07Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- From Motion to Muscle [0.0]
We show that muscle activity can be artificially generated based on motion features such as position, velocity, and acceleration.
The model achieves high precision for previously trained movements and maintains high precision for new movements that were not seen during training.
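The motion features this blurb names (position, velocity, acceleration) can be derived from a sampled trajectory by finite differences. The sketch below shows this feature extraction on a hypothetical joint-angle signal; the linear readout weights are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical 1D joint-angle trajectory sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
position = np.sin(2 * np.pi * t)          # joint angle (rad)

# Velocity and acceleration via finite differences.
velocity = np.gradient(position, dt)      # rad/s
acceleration = np.gradient(velocity, dt)  # rad/s^2

# Per-frame motion feature vector [position, velocity, acceleration].
features = np.stack([position, velocity, acceleration], axis=1)

# Stand-in linear readout to one muscle channel, rectified because
# sEMG activation envelopes are non-negative. Weights are made up.
w = np.array([0.2, 0.05, 0.01])
activation = np.maximum(features @ w, 0.0)

print(features.shape, activation.shape)
```

A trained model would replace the hand-picked weights with a regressor fit on synchronized motion/sEMG recordings.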
arXiv Detail & Related papers (2022-01-27T13:30:17Z)
- OstrichRL: A Musculoskeletal Ostrich Simulation to Study Bio-mechanical Locomotion [8.849771760994273]
We release a 3D musculoskeletal simulation of an ostrich based on the MuJoCo simulator.
The model is based on CT scans and dissections used to gather actual muscle data.
We also provide a set of reinforcement learning tasks, including reference motion tracking and a reaching task with the neck.
arXiv Detail & Related papers (2021-12-11T19:58:11Z)
- Learning Control Policies for Imitating Human Gaits [2.28438857884398]
Humans perform movements like walking, running, and jumping in a highly efficient manner, which motivated this project.
Skeletal and Musculoskeletal human models were considered for motions in the sagittal plane.
Model-free reinforcement learning algorithms were used to optimize inverse dynamics control actions.
arXiv Detail & Related papers (2021-05-15T16:33:24Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.