High-Resolution Augmentation for Automatic Template-Based Matching of
Human Models
- URL: http://arxiv.org/abs/2009.09312v1
- Date: Sat, 19 Sep 2020 22:41:24 GMT
- Title: High-Resolution Augmentation for Automatic Template-Based Matching of
Human Models
- Authors: Riccardo Marin, Simone Melzi, Emanuele Rodolà, Umberto Castellani
- Abstract summary: We propose a new approach for 3D shape matching of deformable human shapes.
Our approach is based on the joint adoption of three different tools: an intrinsic spectral matching pipeline, a morphable model, and an extrinsic detail refinement.
- Score: 13.45311874573311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new approach for 3D shape matching of deformable human shapes.
Our approach is based on the joint adoption of three different tools: an
intrinsic spectral matching pipeline, a morphable model, and an extrinsic
detail refinement. By operating in conjunction, these tools allow us to
greatly improve the quality of the matching while at the same time resolving
the key issues exhibited by each tool individually. In this paper we present an
innovative High-Resolution Augmentation (HRA) strategy that enables highly
accurate correspondence even in the presence of significant mesh resolution
mismatch between the input shapes. This augmentation provides an effective
workaround for the resolution limitations imposed by the adopted morphable
model. The HRA in its global and localized versions represents a novel
refinement strategy for surface subdivision methods. We demonstrate the
accuracy of the proposed pipeline on multiple challenging benchmarks, and
showcase its effectiveness in surface registration and texture transfer.
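The abstract frames the HRA as a refinement strategy built on surface subdivision; the paper's actual procedure is not spelled out here, so the following is only a minimal sketch of the general idea in Python: upsample a fitted low-resolution template by midpoint subdivision, then snap the new vertices onto the high-resolution target. All function names and the nearest-vertex projection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def midpoint_subdivide(verts, faces):
    """One round of midpoint (1-to-4) triangle subdivision."""
    edge_mid, new_verts = {}, list(verts)
    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            edge_mid[key] = len(new_verts)
            new_verts.append((verts[a] + verts[b]) / 2.0)
        return edge_mid[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return np.asarray(new_verts), np.asarray(new_faces)

def project_to_target(verts, target_verts):
    """Snap each upsampled vertex to its nearest neighbor on the target scan."""
    tree = cKDTree(target_verts)
    _, idx = tree.query(verts)
    return target_verts[idx]
```

A real registration pipeline would likely replace the plain nearest-vertex snap with a point-to-surface projection or a displacement along vertex normals, but the upsample-then-refine structure is the point being illustrated.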
Related papers
- Dora: Sampling and Benchmarking for 3D Shape Variational Auto-Encoders [87.17440422575721]
We present Dora-VAE, a novel approach that enhances VAE reconstruction through our proposed sharp edge sampling strategy and a dual cross-attention mechanism.
To systematically evaluate VAE reconstruction quality, we additionally propose Dora-bench, a benchmark that quantifies shape complexity through the density of sharp edges.
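The sharp edge sampling strategy is only named in this summary. As a hypothetical sketch of one common way to realize the idea, the code below flags mesh edges by dihedral angle and over-samples points along them; the threshold, sample counts, and function names are assumptions, not Dora-VAE's actual procedure.

```python
import numpy as np

def sharp_edges(verts, faces, angle_thresh_deg=30.0):
    """Flag edges whose dihedral angle exceeds a threshold (assumed value)."""
    v = verts[faces]                                   # (F, 3, 3) triangle corners
    n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True)      # per-face unit normals
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    return np.array([e for e, fs in edge_faces.items()
                     if len(fs) == 2 and np.dot(n[fs[0]], n[fs[1]]) < cos_t])

def sample_on_edges(verts, edges, n_per_edge=8):
    """Over-sample surface points along the detected sharp edges."""
    t = np.random.rand(len(edges), n_per_edge, 1)
    a = verts[edges[:, 0]][:, None]                    # (S, 1, 3)
    b = verts[edges[:, 1]][:, None]
    return ((1 - t) * a + t * b).reshape(-1, 3)        # (S * n_per_edge, 3)
```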
arXiv Detail & Related papers (2024-12-23T18:59:06Z)
- ProbeSDF: Light Field Probes for Neural Surface Reconstruction [4.0130618054041385]
SDF-based differential rendering frameworks have achieved state-of-the-art multiview 3D shape reconstruction.
We re-examine this family of approaches by minimally reformulating its core appearance model.
We show this performance to be consistently achieved on real data over two widely different and popular application fields.
arXiv Detail & Related papers (2024-12-13T12:18:26Z)
- Part-aware Shape Generation with Latent 3D Diffusion of Neural Voxel Fields [50.12118098874321]
We introduce a latent 3D diffusion process for neural voxel fields, enabling generation at significantly higher resolutions.
A part-aware shape decoder is introduced to integrate the part codes into the neural voxel fields, guiding the accurate part decomposition.
The results demonstrate the superior generative capabilities of our proposed method in part-aware shape generation, outperforming existing state-of-the-art methods.
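As a generic point of reference for the latent diffusion component, here is a minimal DDPM-style forward-noising step applied to a latent voxel grid. The schedule, timestep count, and tensor shapes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # standard linear schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)

def noisy_latent(z0, t, rng=None):
    """Sample z_t ~ q(z_t | z_0) for a latent voxel field z0 at timestep t."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

z0 = np.random.randn(8, 16, 16, 16)       # (channels, D, H, W) latent voxels
z_t, eps = noisy_latent(z0, t=500)        # a denoiser would be trained to predict eps
```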
arXiv Detail & Related papers (2024-05-02T04:31:17Z)
- SD-MVS: Segmentation-Driven Deformation Multi-View Stereo with Spherical Refinement and EM optimization [6.886220026399106]
We introduce Segmentation-Driven Deformation Multi-View Stereo (SD-MVS) to tackle the challenges of 3D reconstruction in textureless areas.
We are the first to adopt the Segment Anything Model (SAM) to distinguish semantic instances in scenes.
We propose a refinement strategy that combines spherical coordinates and gradient descent for normals with a pixelwise search interval for depths.
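To make the spherical-coordinate part concrete, the following is a hypothetical sketch of refining a normal hypothesis in angle space; the step size and random proposals are assumptions, and the paper's EM-based optimization is not reproduced here.

```python
import numpy as np

def to_spherical(n):
    """Unit normal -> (theta, phi): polar angle and azimuth."""
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))
    phi = np.arctan2(n[1], n[0])
    return theta, phi

def from_spherical(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def perturb_normal(n, step=0.05):
    """Propose a refined normal hypothesis by a small move in angle space."""
    theta, phi = to_spherical(n)
    theta = np.clip(theta + step * np.random.randn(), 0.0, np.pi)
    phi = phi + step * np.random.randn()
    return from_spherical(theta, phi)
```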
arXiv Detail & Related papers (2024-01-12T05:25:57Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
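The phrase "discrete representation learning based on a latent vector" suggests a vector-quantization step; below is a generic VQ assignment sketch with made-up codebook sizes, not ImAM's actual tokenizer.

```python
import numpy as np

def quantize(latents, codebook):
    """Assign each latent vector to its nearest codebook entry (VQ step)."""
    # latents: (N, d), codebook: (K, d)
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    tokens = d2.argmin(axis=1)        # discrete indices an AR model can predict
    return tokens, codebook[tokens]   # token ids and their quantized vectors

codebook = np.random.randn(512, 64)   # illustrative sizes: K=512 codes, d=64
tokens, z_q = quantize(np.random.randn(100, 64), codebook)
```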
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- Unifying Flow, Stereo and Depth Estimation [121.54066319299261]
We present a unified formulation and model for three motion and 3D perception tasks.
We formulate all three tasks as a unified dense correspondence matching problem.
Our model naturally enables cross-task transfer since the model architecture and parameters are shared across tasks.
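A minimal sketch of the generic formulation this summary points at, dense correspondence as a softmax over feature correlations, follows; the temperature and feature shapes are assumptions, and the model's architecture is not reproduced.

```python
import numpy as np

def soft_correspondence(feat1, feat2, tau=0.1):
    """Dense matching as a temperature-scaled softmax over feature correlations."""
    # feat1: (N, d) source features, feat2: (M, d) target features.
    corr = feat1 @ feat2.T / np.sqrt(feat1.shape[1])        # (N, M) similarities
    w = np.exp((corr - corr.max(axis=1, keepdims=True)) / tau)
    return w / w.sum(axis=1, keepdims=True)                 # rows sum to 1
```

Multiplying the row-stochastic result by target pixel coordinates gives expected match locations, which can be read out as optical flow, stereo disparity, or depth depending on the camera setup.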
arXiv Detail & Related papers (2022-11-10T18:59:54Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
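Integrating a residual covariance model into Bundle Adjustment typically means whitening each residual by its predicted covariance; here is a minimal sketch of that weighting step. The covariance values are placeholders, and the paper's specific perspective-deformation model is not reproduced.

```python
import numpy as np

def whitened_residual(r, Sigma):
    """Whiten a 2D reprojection residual by its predicted covariance."""
    L = np.linalg.cholesky(Sigma)      # Sigma = L @ L.T
    return np.linalg.solve(L, r)       # squared norm equals r^T Sigma^-1 r

r = np.array([0.8, -0.3])                          # placeholder residual (pixels)
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])     # placeholder covariance
cost = np.sum(whitened_residual(r, Sigma) ** 2)    # Mahalanobis cost term
```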
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
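To ground the PaMIR summary above, here is a toy sketch of the described combination: an implicit function queried on a 3D point together with a feature sampled from a voxelized parametric body model. The nearest-voxel lookup and the linear stand-in for the MLP are illustrative assumptions only, not the authors' code.

```python
import numpy as np

def query_occupancy(points, voxel_feat, mlp):
    """Query an implicit function conditioned on body-model voxel features."""
    # points: (N, 3) in [-1, 1]^3; voxel_feat: (C, R, R, R) feature volume.
    R = voxel_feat.shape[1]
    idx = np.clip(((points + 1) / 2 * (R - 1)).round().astype(int), 0, R - 1)
    f = voxel_feat[:, idx[:, 0], idx[:, 1], idx[:, 2]].T   # (N, C) sampled feats
    return mlp(np.concatenate([points, f], axis=1))        # (N, 1) occupancy

# Toy stand-ins: a random feature volume and a sigmoid over a linear layer.
voxel_feat = np.random.randn(8, 32, 32, 32)
W = np.random.randn(11, 1)
occ = query_occupancy(np.random.rand(5, 3) * 2 - 1, voxel_feat,
                      lambda x: 1.0 / (1.0 + np.exp(-x @ W)))
```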
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.