Best Foot Forward: Robust Foot Reconstruction in-the-wild
- URL: http://arxiv.org/abs/2502.20511v1
- Date: Thu, 27 Feb 2025 20:40:20 GMT
- Title: Best Foot Forward: Robust Foot Reconstruction in-the-wild
- Authors: Kyle Fogarty, Jing Yang, Chayan Kumar Patodi, Aadi Bhanti, Steven Chacko, Cengiz Oztireli, Ujwal Bonde
- Abstract summary: We present a novel end-to-end pipeline that refines Structure-from-Motion (SfM) reconstruction. It first resolves scan alignment ambiguities using SE(3) canonicalization with a viewpoint prediction module, then completes missing geometry through an attention-based network trained on synthetically augmented point clouds. Our approach achieves state-of-the-art performance on reconstruction metrics while preserving clinically validated anatomical fidelity.
- Score: 2.059210052546126
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate 3D foot reconstruction is crucial for personalized orthotics, digital healthcare, and virtual fittings. However, existing methods struggle with incomplete scans and anatomical variations, particularly in self-scanning scenarios where user mobility is limited, making it difficult to capture areas like the arch and heel. We present a novel end-to-end pipeline that refines Structure-from-Motion (SfM) reconstruction. It first resolves scan alignment ambiguities using SE(3) canonicalization with a viewpoint prediction module, then completes missing geometry through an attention-based network trained on synthetically augmented point clouds. Our approach achieves state-of-the-art performance on reconstruction metrics while preserving clinically validated anatomical fidelity. By combining synthetic training data with learned geometric priors, we enable robust foot reconstruction under real-world capture conditions, unlocking new opportunities for mobile-based 3D scanning in healthcare and retail.
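The abstract leaves the implementation unspecified, but the canonicalization step is easy to picture: a viewpoint predictor estimates the scan's pose, and the inverse SE(3) transform brings the points into a shared canonical frame. The sketch below is a minimal illustration of that idea; the function name, the viewpoint-predictor interface, and the row-vector convention are assumptions, not the authors' code.

```python
# Minimal sketch of SE(3) canonicalization for a raw SfM foot scan.
# (R_pred, t_pred) would come from a viewpoint-prediction module, which
# is hypothetical here; nothing below is the paper's implementation.
import numpy as np

def canonicalize_scan(points: np.ndarray,
                      R_pred: np.ndarray,
                      t_pred: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud into a canonical foot frame.

    (R_pred, t_pred) is the predicted pose of the scan relative to the
    canonical frame, so we apply the inverse: x_canon = R^T (x - t).
    """
    return (points - t_pred) @ R_pred  # row-vector form of R^T (x - t)
```

A completion network then only ever sees canonicalized inputs, which is what lets a single learned shape prior cover arbitrarily oriented self-scans.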
Related papers
- IXGS-Intraoperative 3D Reconstruction from Sparse, Arbitrarily Posed Real X-rays [1.2721397985664153]
We extend the $R^2$-Gaussian splatting framework to reconstruct consistent 3D volumes under challenging conditions.
We introduce an anatomy-guided radiographic standardization step using style transfer, improving visual consistency across views.
arXiv Detail & Related papers (2025-04-20T18:28:13Z)
- CrossSDF: 3D Reconstruction of Thin Structures From Cross-Sections [23.35977941611922]
CrossSDF is a novel approach for extracting a 3D signed distance field from 2D signed distances generated from planar contours. Our results demonstrate a significant improvement over existing methods, effectively reconstructing thin structures and producing accurate 3D models.
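As a concrete reference for the quantity CrossSDF starts from, here is a standard signed distance from a point to a closed planar contour; this is textbook computational geometry, not code from the paper.

```python
# Standard 2D signed distance to a closed polygonal contour; negative
# inside, positive outside. Illustrative only -- not the CrossSDF code.
import numpy as np

def polygon_sdf(p: np.ndarray, contour: np.ndarray) -> float:
    """p: (2,) query point; contour: (M, 2) closed polygon vertices."""
    a, b = contour, np.roll(contour, -1, axis=0)
    ab = b - a
    # Unsigned distance: closest point on each edge, then the minimum.
    t = np.clip(((p - a) * ab).sum(1) / (ab * ab).sum(1), 0.0, 1.0)
    closest = a + t[:, None] * ab
    dist = np.linalg.norm(p - closest, axis=1).min()
    # Sign by the even-odd rule: cast a horizontal ray, count crossings.
    with np.errstate(divide="ignore", invalid="ignore"):
        x_hit = a[:, 0] + (p[1] - a[:, 1]) * ab[:, 0] / ab[:, 1]
    crosses = ((a[:, 1] > p[1]) != (b[:, 1] > p[1])) & (p[0] < x_hit)
    return -dist if crosses.sum() % 2 else dist
```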
arXiv Detail & Related papers (2024-12-05T12:38:18Z)
- EndoSparse: Real-Time Sparse View Synthesis of Endoscopic Scenes using Gaussian Splatting [39.60431471170721]
3D reconstruction of biological tissues from a collection of endoscopic images is key to unlocking various important downstream surgical applications with 3D capabilities.
Existing methods employ various advanced neural rendering techniques for view synthesis, but they often struggle to recover accurate 3D representations when only sparse observations are available.
We propose a framework, dubbed EndoSparse, that leverages prior knowledge from multiple foundation models during the reconstruction process.
arXiv Detail & Related papers (2024-07-01T07:24:09Z)
- Domain adaptation strategies for 3D reconstruction of the lumbar spine using real fluoroscopy data [9.21828361691977]
This study tackles key obstacles in adopting surgical navigation in orthopedic surgeries.
It presents an approach for generating 3D anatomical models of the spine from only a few fluoroscopic images.
It achieved an 84% F1 score, matching the accuracy of our previous synthetic data-based research.
arXiv Detail & Related papers (2024-01-29T10:22:45Z)
- Anatomy-guided domain adaptation for 3D in-bed human pose estimation [62.3463429269385]
3D human pose estimation is a key component of clinical monitoring systems.
We present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain.
Our method consistently outperforms various state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-11-22T11:34:51Z)
- NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction [64.36535692191343]
Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems.
This paper addresses two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one.
Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.
arXiv Detail & Related papers (2022-07-22T10:05:36Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction [61.62191904755521]
3D freehand US reconstruction is promising for addressing this problem by providing a broad scan range and freeform scanning.
Existing deep learning-based methods focus only on basic cases of scanning skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction that accounts for complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration [67.69257782645789]
We propose piecewise transformation fields that learn 3D translation vectors to map any query point in posed space to its corresponding position in rest-pose space.
We show that fitting parametric models with poses initialized by our network results in much better registration quality, especially for extreme poses.
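The mechanism described above is compact enough to sketch: one small network per part predicts a translation, and a soft part assignment blends them into a single displacement. The part count and layer widths below are placeholders, not values from the paper.

```python
# Illustrative piecewise transformation field (PyTorch). Part count and
# hidden sizes are assumptions; this is a sketch, not the paper's model.
import torch
import torch.nn as nn

class PiecewiseTransformationField(nn.Module):
    def __init__(self, num_parts: int = 24, hidden: int = 128):
        super().__init__()
        # One small MLP per body part predicts a 3D translation vector.
        self.part_fields = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 3))
            for _ in range(num_parts)
        )
        # A shared head predicts soft part-assignment weights per point.
        self.assign = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, num_parts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (N, 3) points in posed space -> (N, 3) rest-pose estimates."""
        w = self.assign(x).softmax(dim=-1)                    # (N, P)
        t = torch.stack([f(x) for f in self.part_fields], 1)  # (N, P, 3)
        return x + (w.unsqueeze(-1) * t).sum(dim=1)           # blended shift
```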
arXiv Detail & Related papers (2021-04-16T15:16:09Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids both second-order differentiation when training the model parameters and expensive state gradient descent at test time.
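The learning-to-optimize idea admits a short sketch: rather than running gradient descent on pose and shape parameters at test time, a trained network proposes each update from the current estimate and image features. The parameter dimension and module shapes below are assumptions, not the released HUND architecture.

```python
# Sketch of a learned, first-order refinement loop in the spirit of HUND.
# feat_dim/param_dim are placeholder sizes, not values from the paper.
import torch
import torch.nn as nn

class NeuralDescentStep(nn.Module):
    def __init__(self, feat_dim: int = 256, param_dim: int = 85, hidden: int = 512):
        super().__init__()
        self.update = nn.Sequential(
            nn.Linear(feat_dim + param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, param_dim),
        )

    def forward(self, theta: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # Predict an additive refinement of the current parameter estimate.
        return theta + self.update(torch.cat([theta, feats], dim=-1))

def refine(theta0: torch.Tensor, feats: torch.Tensor,
           step: NeuralDescentStep, n_iters: int = 5) -> torch.Tensor:
    theta = theta0
    for _ in range(n_iters):  # unrolled loop; no test-time gradient descent
        theta = step(theta, feats)
    return theta
```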
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
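The supervision signal is simple to sketch: soft per-part masks from a differentiable renderer are scored against ground-truth part labels. The renderer is treated as a black box below, and the negative log-likelihood form is a common choice rather than the paper's exact loss.

```python
# Sketch of part-segmentation supervision: per-part soft masks rendered
# differentiably are compared to ground-truth part labels. Illustrative
# loss choice; not the paper's exact formulation.
import torch
import torch.nn.functional as F

def part_segmentation_loss(rendered: torch.Tensor,
                           gt_labels: torch.Tensor) -> torch.Tensor:
    """rendered: (B, P, H, W) per-part probabilities summing to 1 over P;
    gt_labels: (B, H, W) integer part indices in [0, P)."""
    log_probs = torch.log(rendered.clamp_min(1e-8))
    return F.nll_loss(log_probs, gt_labels)
```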
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.