Inverse Rendering for High-Genus 3D Surface Meshes from Multi-view Images with Persistent Homology Priors
- URL: http://arxiv.org/abs/2601.12155v1
- Date: Sat, 17 Jan 2026 20:06:19 GMT
- Title: Inverse Rendering for High-Genus 3D Surface Meshes from Multi-view Images with Persistent Homology Priors
- Authors: Xiang Gao, Xinmu Wang, Yuanpeng Liu, Yue Wang, Junqi Huang, Wei Chen, Xianfeng Gu
- Abstract summary: Reconstructing 3D objects from images is inherently an ill-posed problem due to ambiguities in geometry, appearance, and topology. This paper introduces collaborative inverse rendering with persistent homology priors, a novel strategy that leverages topological constraints to resolve these ambiguities.
- Score: 11.227213428407673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing 3D objects from images is inherently an ill-posed problem due to ambiguities in geometry, appearance, and topology. This paper introduces collaborative inverse rendering with persistent homology priors, a novel strategy that leverages topological constraints to resolve these ambiguities. By incorporating priors that capture critical features such as tunnel loops and handle loops, our approach directly addresses the difficulty of reconstructing high-genus surfaces. The collaboration between photometric consistency from multi-view images and homology-based guidance enables recovery of complex high-genus geometry while circumventing catastrophic failures such as collapsing tunnels or losing high-genus structure. Instead of neural networks, our method relies on gradient-based optimization within a mesh-based inverse rendering framework to highlight the role of topological priors. Experimental results show that incorporating persistent homology priors leads to lower Chamfer Distance (CD) and higher Volume IoU compared to state-of-the-art mesh-based methods, demonstrating improved geometric accuracy and robustness against topological failure.
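The two metrics reported in the abstract, Chamfer Distance (CD) and Volume IoU, can be sketched in a few lines. The implementation below is an illustrative reference, not the authors' evaluation code; it assumes point sets sampled from the reconstructed and ground-truth surfaces, and boolean occupancy grids obtained by voxelizing the two meshes.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p: (N, 3) and q: (M, 3).

    Lower is better: each point is matched to its nearest neighbor in the
    other set, and the two mean nearest-neighbor distances are summed.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def volume_iou(a, b):
    """Volume IoU between two boolean occupancy grids of the same shape.

    Higher is better: occupied-volume intersection over union.
    """
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union
```

For real meshes, `p` and `q` would be dense surface samples (thousands of points), and the occupancy grids would come from voxelizing the watertight meshes; the brute-force pairwise distance matrix above should be replaced by a KD-tree query at that scale.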
Related papers
- A Diffusion-Based Generative Prior Approach to Sparse-view Computed Tomography [1.0960289997471082]
We present a framework for the reconstruction of X-ray CT images from sparse geometries using deep generative models. The results obtained even under highly sparse geometries are very promising, although further research is clearly needed in this direction.
arXiv Detail & Related papers (2026-02-11T10:27:41Z) - Joint Geometry-Appearance Human Reconstruction in a Unified Latent Space via Bridge Diffusion [57.09673862519791]
This paper introduces JGA-LBD, a novel framework that unifies the modeling of geometry and appearance into a joint latent representation. Experiments demonstrate that JGA-LBD outperforms current state-of-the-art approaches in terms of both geometry fidelity and appearance quality.
arXiv Detail & Related papers (2026-01-01T12:48:56Z) - Inverse Rendering for High-Genus Surface Meshes from Multi-View Images [23.03019377701584]
Mesh-based representations are preferred as they enable the application of differential geometry theory and are optimized for modern graphics pipelines. Existing inverse rendering methods often fail catastrophically on high-genus surfaces, leading to the loss of key topological features. We present a topology-informed inverse rendering approach for reconstructing high-genus surface meshes from multi-view images.
arXiv Detail & Related papers (2025-11-24T01:44:09Z) - Rethinking Multimodal Point Cloud Completion: A Completion-by-Correction Perspective [8.276620253870338]
Point cloud completion aims to reconstruct complete 3D shapes from partial observations. Most methods still follow a Completion-by-Inpainting paradigm. We propose Completion-by-Correction, which begins with a complete shape prior and performs feature-space correction to align it with the partial observation.
arXiv Detail & Related papers (2025-11-15T11:51:13Z) - PRGCN: A Graph Memory Network for Cross-Sequence Pattern Reuse in 3D Human Pose Estimation [18.771349697842947]
This work introduces the Pattern Reuse Graph Convolutional Network (PRGCN), a novel framework that formalizes pose estimation as a problem of pattern retrieval and adaptation. At its core, PRGCN features a graph memory bank that learns and stores a compact set of pose prototypes, encoded as relational graphs, which are dynamically retrieved via an attention mechanism to provide structured priors. Our work posits that PRGCN establishes a new state of the art, achieving MPJPEs of 37.1 mm and 13.4 mm, respectively, while exhibiting enhanced cross-domain generalization capability.
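The MPJPE figures quoted above refer to the standard mean per-joint position error for 3D pose estimation; a minimal sketch (illustrative, not the paper's code) is:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance between
    predicted and ground-truth 3D joints, both shaped (..., J, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()
```

In benchmark reports the error is usually given in millimeters, averaged over all joints and frames, sometimes after rigid or Procrustes alignment of the prediction to the ground truth.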
arXiv Detail & Related papers (2025-10-22T11:12:07Z) - Dense Semantic Matching with VGGT Prior [49.42199006453071]
We propose an approach that retains VGGT's intrinsic strengths by reusing early feature stages, fine-tuning later ones, and adding a semantic head for bidirectional correspondences. Our approach achieves superior geometry awareness, matching reliability, and manifold preservation, outperforming previous baselines.
arXiv Detail & Related papers (2025-09-25T14:56:11Z) - Sparse-View 3D Reconstruction: Recent Advances and Open Challenges [0.8583178253811411]
Sparse-view 3D reconstruction is essential for applications in which dense image acquisition is impractical. This survey reviews the latest advances in neural implicit models and explicit point-cloud-based approaches. We analyze how geometric regularization, explicit shape modeling, and generative inference are used to mitigate artifacts.
arXiv Detail & Related papers (2025-07-22T09:57:28Z) - Aligned Novel View Image and Geometry Synthesis via Cross-modal Attention Instillation [62.87088388345378]
We introduce a diffusion-based framework that performs aligned novel view image and geometry generation via a warping-and-inpainting methodology. The method leverages off-the-shelf geometry predictors to predict partial geometries viewed from reference images. Cross-modal attention distillation is proposed to ensure accurate alignment between generated images and geometry.
arXiv Detail & Related papers (2025-06-13T16:19:00Z) - Geometric Prior-Guided Neural Implicit Surface Reconstruction in the Wild [13.109693095684921]
We introduce a novel approach that applies multiple geometric constraints to the implicit surface optimization process. First, we utilize sparse 3D points from structure-from-motion (SfM) to refine the signed distance function estimation for the reconstructed surface. We also employ robust normal priors derived from a normal predictor, enhanced by edge prior filtering and multi-view consistency constraints.
arXiv Detail & Related papers (2025-05-12T09:17:30Z) - Reconstructing Topology-Consistent Face Mesh by Volume Rendering from Multi-View Images [71.20113392204183]
Industrial 3D face asset creation typically reconstructs topology-consistent face meshes from multi-view images for downstream production. NeRF has shown great advantages in 3D reconstruction by representing scenes as density and radiance fields. We introduce a novel method which combines an explicit mesh with neural volume rendering to optimize the geometry of an artist-made template face mesh from multi-view images.
arXiv Detail & Related papers (2024-04-08T15:25:50Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their quality degrades for larger and more complex scenes. This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.