Shape from Projections via Differentiable Forward Projector for Computed
Tomography
- URL: http://arxiv.org/abs/2006.16120v4
- Date: Thu, 11 Mar 2021 08:31:41 GMT
- Title: Shape from Projections via Differentiable Forward Projector for Computed
Tomography
- Authors: Jakeoung Koo, Anders B. Dahl, J. Andreas B{\ae}rentzen, Qiongyang
Chen, Sara Bals, Vedrana A. Dahl
- Abstract summary: We propose a differentiable forward model for 3D meshes that bridges the gap between the forward model for 3D surfaces and optimization.
We use the proposed forward model to reconstruct 3D shapes directly from projections.
Experimental results for single-object problems show that the proposed method outperforms traditional voxel-based methods on noisy simulated data.
- Score: 4.304380400377787
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In computed tomography, the reconstruction is typically obtained on a voxel
grid. In this work, however, we propose a mesh-based reconstruction method. For
tomographic problems, 3D meshes have mostly been studied to simulate data
acquisition, but not for reconstruction, for which a 3D mesh means the inverse
process of estimating shapes from projections. In this paper, we propose a
differentiable forward model for 3D meshes that bridges the gap between the
forward model for 3D surfaces and optimization. We view the forward projection
as a rendering process, and make it differentiable by extending recent work in
differentiable rendering. We use the proposed forward model to reconstruct 3D
shapes directly from projections. Experimental results for single-object
problems show that the proposed method outperforms traditional voxel-based
methods on noisy simulated data. We also apply the proposed method on electron
tomography images of nanoparticles to demonstrate the applicability of the
method on real data.
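The paper's forward model projects triangle meshes; as a minimal toy sketch of the same shape-from-projections idea (a Gaussian blob phantom with an analytic parallel-beam projection stands in for the mesh, and all names here are illustrative assumptions, not the authors' method), one can fit shape parameters directly to a measured projection by gradient descent through a differentiable forward projector:

```python
import numpy as np

# Toy stand-in for a differentiable forward projector: the parallel-beam
# projection of a Gaussian blob is itself a Gaussian in the detector
# coordinate s, so the shape parameters (amplitude A, center c, width sigma)
# can be recovered from a measured projection by plain gradient descent.
def project(A, c, sigma, s):
    return A * np.exp(-(s - c) ** 2 / (2.0 * sigma ** 2))

def step(A, c, sigma, s, target, lr=0.5):
    p = project(A, c, sigma, s)
    r = 2.0 * (p - target) / s.size              # d(MSE)/dp
    e = np.exp(-(s - c) ** 2 / (2.0 * sigma ** 2))
    gA = np.sum(r * e)                           # analytic gradients
    gc = np.sum(r * A * e * (s - c) / sigma ** 2)
    gs = np.sum(r * A * e * (s - c) ** 2 / sigma ** 3)
    return A - lr * gA, c - lr * gc, sigma - lr * gs

s = np.linspace(-3.0, 3.0, 121)                  # detector coordinates
target = project(2.0, 0.5, 0.8, s)               # "measured" projection
A, c, sigma = 1.0, 0.0, 1.0                      # initial shape guess
for _ in range(3000):
    A, c, sigma = step(A, c, sigma, s, target)
# A, c, sigma now approximate the true parameters (2.0, 0.5, 0.8)
```

The paper replaces this toy analytic projector with a differentiable renderer of a 3D mesh, but the optimization structure is the same: forward-project the current shape, compare against the measured projections, and backpropagate to the shape parameters.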
Related papers
- Personalized 3D Human Pose and Shape Refinement [19.082329060985455]
Regression-based methods have dominated the field of 3D human pose and shape estimation.
We propose to construct dense correspondences between initial human model estimates and the corresponding images.
We show that our approach not only consistently leads to better image-model alignment, but also to improved 3D accuracy.
arXiv Detail & Related papers (2024-03-18T10:13:53Z) - Disjoint Pose and Shape for 3D Face Reconstruction [4.096453902709292]
We propose an end-to-end pipeline that disjointly solves for pose and shape to make the optimization stable and accurate.
The proposed method achieves end-to-end topological consistency, enables an iterative face pose refinement procedure, and shows remarkable improvements in both quantitative and qualitative results.
arXiv Detail & Related papers (2023-08-26T15:18:32Z) - Unsupervised 3D Pose Estimation with Non-Rigid Structure-from-Motion
Modeling [83.76377808476039]
We propose a new modeling method for human pose deformations and design an accompanying diffusion-based motion prior.
Inspired by the field of non-rigid structure-from-motion, we divide the task of reconstructing 3D human skeletons in motion into the estimation of a 3D reference skeleton and a per-frame skeleton deformation.
A mixed spatial-temporal NRSfMformer is used to simultaneously estimate the 3D reference skeleton and the skeleton deformation of each frame from a sequence of 2D observations.
arXiv Detail & Related papers (2023-08-18T16:41:57Z) - MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
arXiv Detail & Related papers (2023-03-14T17:59:01Z) - Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models [33.343489006271255]
Diffusion models have emerged as the new state-of-the-art generative models, producing high-quality samples.
We propose to augment the 2D diffusion prior with a model-based prior in the remaining direction at test time, such that one can achieve coherent reconstructions across all dimensions.
Our method can be run in a single commodity GPU, and establishes the new state-of-the-art.
arXiv Detail & Related papers (2022-11-19T10:32:21Z) - NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z) - Uncertainty Guided Policy for Active Robotic 3D Reconstruction using
Neural Radiance Fields [82.21033337949757]
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
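A minimal sketch of this ray-based uncertainty idea (variable names and sampling details are assumptions, not the paper's implementation): compute volume-rendering weights from sampled densities along a ray, then take the entropy of the normalized weight distribution as the uncertainty score.

```python
import numpy as np

# Volume-rendering weights along a ray: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
# where T_i is the transmittance accumulated before sample i. The entropy of the
# normalized weights is low when the surface location is confidently peaked and
# high when density is smeared along the ray.
def ray_weights(sigmas, deltas):
    alphas = 1.0 - np.exp(-sigmas * deltas)                      # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance T_i
    return trans * alphas

def ray_entropy(sigmas, deltas, eps=1e-12):
    w = ray_weights(sigmas, deltas)
    p = w / (w.sum() + eps)                                      # normalize to a distribution
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

deltas = np.full(16, 0.1)
peaked = np.zeros(16); peaked[8] = 80.0                          # surface confidently located
diffuse = np.full(16, 0.5)                                      # density smeared along ray
h_peaked, h_diffuse = ray_entropy(peaked, deltas), ray_entropy(diffuse, deltas)
```

Averaging this per-ray entropy over the rays of a candidate view gives a scalar score that a next-best-view policy can maximize.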
arXiv Detail & Related papers (2022-09-17T21:28:57Z) - Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z) - Data-Driven Shadowgraph Simulation of a 3D Object [50.591267188664666]
We replace the numerical code with a computationally cheaper projection-based surrogate model.
The model is able to approximate the electric fields at a given time without computing all preceding electric fields as required by numerical methods.
The model achieves good-quality reconstructions for data perturbed within a narrow range of simulation parameters and can be applied to large input data.
arXiv Detail & Related papers (2021-06-01T08:46:04Z) - An Effective Loss Function for Generating 3D Models from Single 2D Image
without Rendering [0.0]
Differentiable rendering is a very successful technique for single-view 3D reconstruction.
Current methods use pixel-wise losses between a rendered image of the reconstructed 3D object and ground-truth images from matched viewpoints to optimize the parameters of the 3D shape.
We propose a novel effective loss function that evaluates how well the projections of reconstructed 3D point clouds cover the ground truth object's silhouette.
arXiv Detail & Related papers (2021-03-05T00:02:18Z)
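A coverage-style loss of this kind can be sketched as follows (a hedged illustration under simplifying assumptions: orthographic projection and a nearest-point distance, which may differ from the paper's exact formulation):

```python
import numpy as np

# Illustrative silhouette-coverage loss: project the reconstructed point cloud
# to the image plane and penalize, for every ground-truth silhouette pixel, the
# distance to the nearest projected point. A low loss means the projected
# points cover the silhouette well.
def coverage_loss(points3d, silhouette_xy):
    proj = points3d[:, :2]                                  # orthographic projection (drop z)
    # pairwise distances, shape (num_silhouette_pixels, num_points)
    d = np.linalg.norm(silhouette_xy[:, None, :] - proj[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())                      # mean nearest-point distance

# Toy check: a grid of points covering a square silhouette vs. a shifted copy.
g = np.linspace(0.0, 1.0, 5)
silhouette_xy = np.array([[x, y] for x in g for y in g])
covering = np.array([[x, y, 0.0] for x in g for y in g])    # matches the silhouette
offset = covering + np.array([2.0, 0.0, 0.0])               # shifted away from it
loss_good = coverage_loss(covering, silhouette_xy)
loss_bad = coverage_loss(offset, silhouette_xy)
```

Because the distance depends smoothly on the projected point positions, such a loss can drive gradient-based shape optimization without rendering a full image.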
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.