TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network
- URL: http://arxiv.org/abs/2208.08768v2
- Date: Mon, 22 Aug 2022 14:45:12 GMT
- Title: TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network
- Authors: Ahmet Serdar Karadeniz, Sk Aziz Ali, Anis Kacem, Elona Dupont, Djamila
Aouada
- Abstract summary: Reconstructing 3D human body shapes from 3D partial textured scans is a fundamental task for many computer vision and graphics applications.
We propose a new neural network architecture for 3D body shape and high-resolution texture completion.
- Score: 14.389603490486364
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reconstructing 3D human body shapes from 3D partial textured scans remains a
fundamental task for many computer vision and graphics applications -- e.g.,
body animation, and virtual dressing. We propose a new neural network
architecture for 3D body shape and high-resolution texture completion --
TSCom-Net -- that can reconstruct the full geometry from mid-level to high-level
partial input scans. We decompose the overall reconstruction task into two
stages: first, a joint implicit learning network (SCom-Net and TCom-Net)
takes a voxelized scan and its occupancy grid as input to reconstruct the
full body shape and predict vertex textures; second, a high-resolution
texture completion network utilizes the predicted coarse vertex textures to
inpaint the missing parts of the partial 'texture atlas'. A thorough
experimental evaluation on 3DBodyTex.V2 dataset shows that our method achieves
competitive results with respect to the state-of-the-art while generalizing to
different types and levels of partial shapes. The proposed method also ranked
second in Track 1 of the SHApe Recovery from Partial textured 3D scans
(SHARP) 2022 challenge.
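The two-stage decomposition above (coarse vertex colours first, then texture-atlas inpainting guided by them) can be illustrated with a toy sketch. The learned networks are not reproduced here; `inpaint_atlas` is a hypothetical diffusion-style fill standing in for the second stage, propagating known colours into the masked holes of a partial atlas:

```python
import numpy as np

def inpaint_atlas(atlas, valid_mask, n_iters=50):
    """Toy stand-in for the texture-completion stage.

    Missing atlas pixels are filled from their already-known
    4-neighbours, so coarse colour predictions diffuse into the holes.
    atlas: (H, W, 3) float array; valid_mask: (H, W) bool array.
    """
    out = atlas.copy()
    known = valid_mask.copy()
    for _ in range(n_iters):
        # Edge-pad values and the validity mask, then gather the
        # 4-neighbourhood of every pixel in one vectorized step.
        pad = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
        kpad = np.pad(known.astype(float), ((1, 1), (1, 1)), mode="edge")
        num = (pad[:-2, 1:-1] * kpad[:-2, 1:-1, None] +
               pad[2:, 1:-1] * kpad[2:, 1:-1, None] +
               pad[1:-1, :-2] * kpad[1:-1, :-2, None] +
               pad[1:-1, 2:] * kpad[1:-1, 2:, None])
        den = (kpad[:-2, 1:-1] + kpad[2:, 1:-1] +
               kpad[1:-1, :-2] + kpad[1:-1, 2:])
        # Fill only pixels that are still unknown but have a known neighbour.
        grow = (den > 0) & ~known
        out[grow] = (num / np.maximum(den, 1)[..., None])[grow]
        known |= grow
    return out
```

In the actual method a learned inpainting network replaces this heuristic; the sketch only shows the data flow of the second stage (partial atlas plus validity mask in, completed atlas out), with known pixels left untouched.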
Related papers
- Inferring Implicit 3D Representations from Human Figures on Pictorial
Maps [1.0499611180329804]
We present an automated workflow to bring human figures, one of the most frequently appearing entities on pictorial maps, to the third dimension.
We first let a network consisting of fully connected layers estimate the depth coordinate of 2D pose points.
The gained 3D pose points are inputted together with 2D masks of body parts into a deep implicit surface network to infer 3D signed distance fields.
arXiv Detail & Related papers (2022-08-30T19:29:18Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction, to study the model generalization on unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Deep Hybrid Self-Prior for Full 3D Mesh Generation [57.78562932397173]
We propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality.
In particular, we first generate an initial mesh using a 3D convolutional neural network with 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas.
Our method recovers the 3D textured mesh model of high quality from sparse input, and outperforms the state-of-the-art methods in terms of both the geometry and texture quality.
arXiv Detail & Related papers (2021-08-18T07:44:21Z)
- 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z)
- Implicit Feature Networks for Texture Completion from Partial 3D Data [56.93289686162015]
We generalize IF-Nets to texture completion from partial textured scans of humans and arbitrary objects.
Our model successfully inpaints the missing texture parts consistently with the completed geometry.
arXiv Detail & Related papers (2020-09-20T15:48:17Z)
- AutoSweep: Recovering 3D Editable Objects from a Single Photograph [54.701098964773756]
We aim to recover 3D objects with semantic parts that can be directly edited.
Our work makes an attempt towards recovering two types of primitive-shaped objects, namely, generalized cuboids and generalized cylinders.
Our algorithm can recover high quality 3D models and outperforms existing methods in both instance segmentation and 3D reconstruction.
arXiv Detail & Related papers (2020-05-27T12:16:24Z)
- CoReNet: Coherent 3D scene reconstruction from a single RGB image [43.74240268086773]
We build on advances in deep learning to reconstruct the shape of a single object given only one RGB image as input.
We propose three extensions: (1) ray-traced skip connections that propagate local 2D information to the output 3D volume in a physically correct manner; (2) a hybrid 3D volume representation that enables building translation equivariant models; and (3) a reconstruction loss tailored to capture overall object geometry.
We reconstruct all objects jointly in one pass, producing a coherent reconstruction, where all objects live in a single consistent 3D coordinate frame relative to the camera and they do not intersect in 3D space.
arXiv Detail & Related papers (2020-04-27T17:53:07Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
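The continuous-output idea behind IF-Nets, which the TSCom-Net stage-one networks also build on, is to decode occupancy at arbitrary 3D query points from features sampled off a discretized grid. A minimal numpy sketch of that query mechanism follows; the grid shape, the one-layer logistic decoder, and the weights are all illustrative, not the published architecture:

```python
import numpy as np

def trilinear_sample(grid, pts):
    """Sample a (C, D, H, W) feature grid at N points in [0, 1]^3.

    Returns an (N, C) array of interpolated features, so the decoded
    field is continuous even though the encoder output is a discrete grid.
    """
    C, D, H, W = grid.shape
    xyz = pts * (np.array([D, H, W]) - 1)            # to voxel coordinates
    lo = np.floor(xyz).astype(int)
    hi = np.minimum(lo + 1, np.array([D, H, W]) - 1)
    t = xyz - lo                                     # fractional offsets
    out = np.zeros((pts.shape[0], C))
    # Blend the 8 surrounding grid corners with trilinear weights.
    for dz, z in ((0, lo[:, 0]), (1, hi[:, 0])):
        for dy, y in ((0, lo[:, 1]), (1, hi[:, 1])):
            for dx, x in ((0, lo[:, 2]), (1, hi[:, 2])):
                w = ((t[:, 0] if dz else 1 - t[:, 0]) *
                     (t[:, 1] if dy else 1 - t[:, 1]) *
                     (t[:, 2] if dx else 1 - t[:, 2]))
                out += w[:, None] * grid[:, z, y, x].T
    return out

def occupancy(grid, pts, w, b):
    """Toy decoder: sampled features -> occupancy probability in (0, 1)."""
    f = trilinear_sample(grid, pts)
    return 1.0 / (1.0 + np.exp(-(f @ w + b)))
```

In IF-Nets proper, the decoder is a deeper network fed with multi-scale features; the sketch only shows why such models handle arbitrary topologies and sparse input, since any point in space can be queried independently.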
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.