MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera
- URL: http://arxiv.org/abs/2004.05815v3
- Date: Thu, 1 Oct 2020 08:00:34 GMT
- Title: MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera
- Authors: Zhaoqi Su and Weilin Wan and Tao Yu and Lingjie Liu and Lu Fang and Wenping Wang and Yebin Liu
- Abstract summary: We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning.
The method uses "multi-layer" representations for geometry reconstruction and texture rendering, respectively.
MulayCap can be applied to various important editing applications, such as cloth editing, re-targeting, relighting, and AR applications.
- Score: 68.51530260071914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce MulayCap, a novel human performance capture method using a
monocular video camera without the need for pre-scanning. The method uses
"multi-layer" representations for geometry reconstruction and texture
rendering, respectively. For geometry reconstruction, we decompose the clothed
human into multiple geometry layers, namely a body mesh layer and a garment
piece layer. The key technique is a Garment-from-Video (GfV) method for
optimizing the garment shape and reconstructing the dynamic cloth to fit the
input video sequence, based on a cloth simulation model which is effectively
solved with gradient descent. For texture rendering, we decompose each input
image frame into a shading layer and an albedo layer, and propose a method for
fusing a fixed albedo map and solving for detailed garment geometry using the
shading layer. Compared with existing single view human performance capture
systems, our "multi-layer" approach bypasses the tedious and time-consuming
scanning step for obtaining a human-specific mesh template. Experimental
results demonstrate that MulayCap produces realistic renderings of dynamically
changing details that have not been achieved by any previous monocular video
camera system. Benefiting from its fully semantic modeling, MulayCap can be
applied to various important editing applications, such as cloth editing,
re-targeting, relighting, and AR applications.
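The abstract describes the core of the GfV step as optimizing garment shape parameters by gradient descent so that the simulated, rendered cloth matches each input frame. The sketch below is only an illustrative reading of that idea, not the authors' implementation: `simulate_cloth`, `render_silhouette`, and the random inputs are hypothetical stand-ins for the paper's cloth simulation model, differentiable image terms, and video observations.

```python
# Minimal sketch (assumptions, not the authors' code) of fitting garment
# parameters by gradient descent so a simulated, rendered garment matches
# per-frame observations, in the spirit of Garment-from-Video (GfV).
import torch
import torch.nn.functional as F

def simulate_cloth(garment_params, body_pose):
    # Toy stand-in for the paper's cloth simulation model: "drape" the
    # garment by offsetting its vertices according to the posed body.
    return garment_params + 0.1 * body_pose

def render_silhouette(cloth_verts):
    # Toy stand-in for a differentiable silhouette / photometric term.
    return torch.sigmoid(cloth_verts).mean(dim=-1)

# Hypothetical inputs: per-frame body poses and observed silhouettes.
num_frames, num_verts = 30, 500
body_poses = torch.randn(num_frames, num_verts, 3)
observed = torch.rand(num_frames, num_verts)

garment_params = torch.zeros(num_verts, 3, requires_grad=True)
optimizer = torch.optim.Adam([garment_params], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.zeros(())
    for pose, target in zip(body_poses, observed):
        cloth = simulate_cloth(garment_params, pose)
        loss = loss + F.mse_loss(render_silhouette(cloth), target)
    loss.backward()  # gradients flow through the simulation proxy
    optimizer.step()
```

In the actual method the inner loop would involve a physics-based cloth simulation and image-based energies, but the overall optimize-by-gradient-descent structure is what the abstract emphasizes.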
Related papers
- Interactive Rendering of Relightable and Animatable Gaussian Avatars [37.73483372890271]
We propose a simple and efficient method to decouple body materials and lighting from multi-view or monocular avatar videos.
Our method can render higher quality results at a faster speed on both synthetic and real datasets.
arXiv Detail & Related papers (2024-07-15T13:25:07Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Lester: rotoscope animation through video object segmentation and tracking [0.0]
Lester is a novel method to automatically synthesise retro-style 2D animations from videos.
Video frames are processed with the Segment Anything Model (SAM) and the resulting masks are tracked through subsequent frames with DeAOT.
Results show that the method exhibits excellent temporal consistency and can correctly process videos with different poses and appearances.
arXiv Detail & Related papers (2024-02-15T11:15:54Z)
- OmnimatteRF: Robust Omnimatte with 3D Background Modeling [42.844343885602214]
We propose a novel video matting method, OmnimatteRF, that combines dynamic 2D foreground layers and a 3D background model.
The 2D layers preserve the details of the subjects, while the 3D background robustly reconstructs scenes in real-world videos.
arXiv Detail & Related papers (2023-09-14T14:36:22Z)
- TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using Differentiable Rendering [54.35405028643051]
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces an RGBD-aided structure from motion, which can yield filtered depth maps.
We adopt a neural implicit surface reconstruction method, which allows for a high-quality mesh.
arXiv Detail & Related papers (2023-03-27T10:07:52Z)
- WALDO: Future Video Synthesis using Object Layer Decomposition and Parametric Flow Prediction [82.79642869586587]
WALDO is a novel approach to the prediction of future video frames from past ones.
Individual images are decomposed into multiple layers combining object masks and a small set of control points.
The layer structure is shared across all frames in each video to build dense inter-frame connections.
arXiv Detail & Related papers (2022-11-25T18:59:46Z)
- SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting [54.419266357283966]
Single image 3D photography enables viewers to view a still image from novel viewpoints.
Recent approaches combine monocular depth networks with inpainting networks to achieve compelling results.
We present SLIDE, a modular and unified system for single image 3D photography.
arXiv Detail & Related papers (2021-09-02T16:37:20Z)
- DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras [63.186486240525554]
DeepMultiCap is a novel method for multi-person performance capture using sparse multi-view cameras.
Our method can capture time-varying surface details without the need for pre-scanned template models.
arXiv Detail & Related papers (2021-05-01T14:32:13Z)
- MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video [10.679773937444445]
We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input.
We build statistical deformation models for three types of clothing: T-shirt, short pants and long pants.
Our method produces temporally coherent reconstruction of body and clothing from monocular video.
arXiv Detail & Related papers (2020-09-22T17:54:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.