Novel-View Human Action Synthesis
- URL: http://arxiv.org/abs/2007.02808v3
- Date: Thu, 8 Oct 2020 10:02:36 GMT
- Title: Novel-View Human Action Synthesis
- Authors: Mohamed Ilyes Lakhal, Davide Boscaini, Fabio Poiesi, Oswald Lanz,
Andrea Cavallaro
- Abstract summary: We present a novel 3D reasoning approach to synthesize the target viewpoint.
We first estimate the 3D mesh of the target body and transfer the rough textures from the 2D images to the mesh.
We produce a semi-dense textured mesh by propagating the transferred textures both locally, within geodesic neighborhoods, and globally, across symmetric semantic parts.
- Score: 39.72702883597454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel-View Human Action Synthesis aims to synthesize the movement of a body
from a virtual viewpoint, given a video from a real viewpoint. We present a
novel 3D reasoning approach to synthesize the target viewpoint. We first estimate the 3D
mesh of the target body and transfer the rough textures from the 2D images to
the mesh. As this transfer may generate sparse textures on the mesh due to
frame resolution or occlusions, we produce a semi-dense textured mesh by
propagating the transferred textures both locally, within geodesic
neighborhoods, and globally, across symmetric semantic parts. Next, we
introduce a context-based generator to learn how to correct and complete the
residual appearance information. This allows the network to independently focus
on learning the foreground and background synthesis tasks. We validate the
proposed solution on the public NTU RGB+D dataset. The code and resources are
available at https://bit.ly/36u3h4K.
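The texture propagation step described above admits a simple graph formulation. The following is a minimal sketch (ours, not the authors' released code) under two assumptions: textures are stored as per-vertex colors, and geodesic distances are approximated by shortest paths along mesh edges; the `radius` threshold and the `symmetry_pairs` correspondence between symmetric semantic parts are illustrative placeholders.

```python
# Minimal sketch (not the authors' code): densify sparse per-vertex colors by
# (1) local propagation within an approximate geodesic radius and
# (2) a global pass copying colors across a given symmetry correspondence.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra


def edge_graph(vertices, faces):
    """Undirected sparse graph whose edge weights are Euclidean edge lengths."""
    e = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)  # deduplicate edges shared by faces
    w = np.linalg.norm(vertices[e[:, 0]] - vertices[e[:, 1]], axis=1)
    n = len(vertices)
    g = coo_matrix((w, (e[:, 0], e[:, 1])), shape=(n, n))
    return (g + g.T).tocsr()


def propagate_textures(vertices, faces, colors, textured, radius=0.05,
                       symmetry_pairs=None):
    """colors: (N, 3) per-vertex colors, valid only where `textured` is True."""
    colors = colors.copy()
    graph = edge_graph(vertices, faces)
    sources = np.flatnonzero(textured)

    # Local step: approximate geodesic distances from all textured vertices
    # (shortest paths along mesh edges) and copy the nearest color within `radius`.
    dist, _, nearest = dijkstra(graph, indices=sources, limit=radius,
                                min_only=True, return_predecessors=True)
    filled = np.isfinite(dist) & ~textured
    colors[filled] = colors[nearest[filled]]
    known = textured | filled

    # Global step: mirror colors across symmetric semantic parts, e.g. copy a
    # textured left arm onto an untextured right arm (correspondence given).
    if symmetry_pairs is not None:
        a, b = symmetry_pairs[:, 0], symmetry_pairs[:, 1]
        m = known[a] & ~known[b]
        colors[b[m]] = colors[a[m]]
        m = known[b] & ~known[a]
        colors[a[m]] = colors[b[m]]
    return colors
```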
Related papers
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussian and mesh primitives for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z) - Mesh Neural Cellular Automata [62.101063045659906]
We propose Mesh Neural Cellular Automata (MeshNCA), a method that directly synthesizes dynamic textures on 3D meshes without requiring any UV maps.
Only trained on an Icosphere mesh, MeshNCA shows remarkable test-time generalization and can synthesize textures on unseen meshes in real time.
arXiv Detail & Related papers (2023-11-06T01:54:37Z) - iNVS: Repurposing Diffusion Inpainters for Novel View Synthesis [45.88928345042103]
We present a method for generating consistent novel views from a single source image.
Our approach focuses on maximizing the reuse of visible pixels from the source image.
We use a monocular depth estimator that transfers visible pixels from the source view to the target view (a minimal sketch of this depth-based transfer follows the list below).
arXiv Detail & Related papers (2023-10-24T20:33:19Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion
Models [77.85129451435704]
We present a new method to synthesize textures for 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoiser to 2D renders of the 3D object and aggregating the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - Generating Texture for 3D Human Avatar from a Single Image using
Sampling and Refinement Networks [8.659903550327442]
We propose a texture synthesis method for a 3D human avatar that incorporates geometry information.
A sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh.
To maintain the clear details in the given image, both the sampled and refined textures are blended to produce the final texture map.
arXiv Detail & Related papers (2023-05-01T16:44:02Z) - CompNVS: Novel View Synthesis with Scene Completion [83.19663671794596]
We propose a generative pipeline that operates on a sparse grid-based neural scene representation to complete unobserved scene parts.
We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing area.
Photorealistic image sequences can be finally obtained via consistency-relevant differentiable rendering.
arXiv Detail & Related papers (2022-07-23T09:03:13Z) - Continuous Object Representation Networks: Novel View Synthesis without
Target View Supervision [26.885846254261626]
Continuous Object Representation Networks (CORN) is a conditional architecture that encodes an input image's geometry and appearance into a 3D-consistent scene representation.
CORN performs well on challenging tasks such as novel view synthesis and single-view 3D reconstruction, achieving results comparable to state-of-the-art approaches that use direct supervision.
arXiv Detail & Related papers (2020-07-30T17:49:44Z) - Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z) - On Demand Solid Texture Synthesis Using Deep 3D Networks [3.1542695050861544]
This paper describes a novel approach for on demand texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes achieve visual quality at least on par with state-of-the-art patch-based approaches.
arXiv Detail & Related papers (2020-01-13T20:59:14Z)
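Several of the entries above, most directly iNVS, rely on transferring visible source pixels to the target view using a monocular depth map. Below is a minimal sketch of such a depth-based forward warp; the pinhole intrinsics `K`, the relative pose `(R, t)`, and the simple z-buffered splatting are assumptions for illustration, not any listed paper's released code.

```python
# Minimal sketch (illustrative, not iNVS code): splat visible source pixels into
# a target view using a depth map and pinhole camera models.
import numpy as np


def warp_to_target(src_img, src_depth, K, R, t):
    """src_img: (H, W, 3); src_depth: (H, W) metric depth in the source frame.
    (R, t) maps source-camera coordinates to target-camera coordinates."""
    H, W, _ = src_img.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)

    # Back-project source pixels to 3D, then move them into the target frame.
    pts_src = (np.linalg.inv(K) @ pix.T).T * src_depth.reshape(-1, 1)
    pts_tgt = pts_src @ R.T + t

    # Keep points in front of the target camera and project them.
    z = pts_tgt[:, 2]
    front = z > 1e-6
    pts_tgt, z = pts_tgt[front], z[front]
    colors = src_img.reshape(-1, 3)[front]
    proj = (K @ pts_tgt.T).T
    x = np.round(proj[:, 0] / z).astype(int)
    y = np.round(proj[:, 1] / z).astype(int)
    inside = (x >= 0) & (x < W) & (y >= 0) & (y < H)
    x, y, z, colors = x[inside], y[inside], z[inside], colors[inside]

    # Z-buffered splatting: draw far-to-near so nearer points overwrite farther.
    tgt = np.zeros_like(src_img)
    zbuf = np.full((H, W), np.inf)
    order = np.argsort(-z)
    tgt[y[order], x[order]] = colors[order]
    zbuf[y[order], x[order]] = z[order]
    return tgt, np.isfinite(zbuf)  # warped image and mask of covered pixels
```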
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.