Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction
- URL: http://arxiv.org/abs/2203.01703v1
- Date: Thu, 3 Mar 2022 13:18:21 GMT
- Title: Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction
- Authors: Dominik J. E. Waibel, Scott Atwell, Matthias Meier, Carsten Marr, and Bastian Rieck
- Abstract summary: We propose to complement geometrical shape information by including multi-scale topological features, such as connected components, cycles, and voids, in the reconstruction loss.
Our method calculates topological features from 3D volumetric data based on cubical complexes and uses an optimal transport distance to guide the reconstruction process.
We demonstrate the utility of our loss by incorporating it into SHAPR, a model for predicting the 3D cell shape of individual cells based on 2D microscopy images.
- Score: 7.323706635751351
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing 3D objects from 2D images is challenging for both our brains
and machine learning algorithms. To support this spatial reasoning task,
contextual information about the overall shape of an object is critical.
However, such information is not captured by established loss terms (e.g. Dice
loss). We propose to complement geometrical shape information by including
multi-scale topological features, such as connected components, cycles, and
voids, in the reconstruction loss. Our method calculates topological features
from 3D volumetric data based on cubical complexes and uses an optimal
transport distance to guide the reconstruction process. This topology-aware
loss is fully differentiable, computationally efficient, and can be added to
any neural network. We demonstrate the utility of our loss by incorporating it
into SHAPR, a model for predicting the 3D cell shape of individual cells based
on 2D microscopy images. Using a hybrid loss that leverages both geometrical
and topological information of single objects to assess their shape, we find
that topological information substantially improves the quality of
reconstructions, thus highlighting its ability to extract more relevant
features from image datasets.
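As a concrete illustration of the idea sketched in the abstract, the following minimal Python example (an assumption-laden sketch, not the authors' implementation) compares the multi-scale topology of a predicted and a target volume: it builds a cubical-complex filtration over the voxel grid with GUDHI, extracts persistence diagrams for connected components, cycles, and voids, and sums per-dimension Wasserstein (optimal transport) distances. It requires the gudhi and POT packages and is not differentiable as written; in the hybrid setting described above, such a term would be weighted and added to a geometrical loss such as Dice rather than used on its own.

```python
# Minimal sketch (an assumption, not the authors' code): compare the multi-scale
# topology of a predicted and a target 3D volume via cubical persistence diagrams
# and a Wasserstein (optimal transport) distance.
import numpy as np
import gudhi
from gudhi.wasserstein import wasserstein_distance


def persistence_diagrams(volume: np.ndarray):
    """Persistence diagrams (dims 0-2) of the sublevel-set cubical filtration."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=volume)
    cc.persistence()  # compute persistence before querying intervals
    diagrams = []
    for dim in range(3):  # 0: connected components, 1: cycles, 2: voids
        dgm = np.asarray(cc.persistence_intervals_in_dimension(dim)).reshape(-1, 2)
        dgm = dgm[np.isfinite(dgm[:, 1])]  # drop essential (infinite-death) features
        diagrams.append(dgm)
    return diagrams


def topological_loss(pred: np.ndarray, target: np.ndarray, order: float = 1.0) -> float:
    """Sum of per-dimension Wasserstein distances between persistence diagrams."""
    pred_dgms = persistence_diagrams(pred)
    target_dgms = persistence_diagrams(target)
    return sum(
        wasserstein_distance(p, t, order=order)
        for p, t in zip(pred_dgms, target_dgms)
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prediction = rng.random((16, 16, 16))                    # e.g. predicted occupancy probabilities
    ground_truth = (rng.random((16, 16, 16)) > 0.5).astype(float)  # binary target volume
    print("topological loss:", topological_loss(prediction, ground_truth))
```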
Related papers
- Scalar Function Topology Divergence: Comparing Topology of 3D Objects [21.49200381462702]
We propose a new topological tool for computer vision: Scalar Function Topology Divergence (SFTD).
SFTD measures the dissimilarity of multi-scale topology between sublevel sets of two functions having a common domain.
The proposed tool provides useful visualizations depicting areas where functions have topological dissimilarities.
arXiv Detail & Related papers (2024-07-11T10:18:54Z)
- LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction [5.107705550575662]
LIST is a novel neural architecture that leverages local and global image features to reconstruct the geometric and topological structure of a 3D object from a single image.
We show that our model outperforms the state of the art in reconstructing 3D objects from both synthetic and real-world images.
arXiv Detail & Related papers (2023-07-23T01:01:27Z)
- 3 Dimensional Dense Reconstruction: A Review of Algorithms and Dataset [19.7595986056387]
3D dense reconstruction refers to the process of obtaining the complete shape and texture features of 3D objects from 2D planar images.
This work systematically introduces classical methods of 3D dense reconstruction based on geometric and optical models.
It also introduces datasets for deep learning and discusses the performance, advantages, and disadvantages that deep learning methods demonstrate on these datasets.
arXiv Detail & Related papers (2023-04-19T01:56:55Z)
- Euler Characteristic Transform Based Topological Loss for Reconstructing 3D Images from Single 2D Slices [9.646922337783137]
We propose a novel topological loss function based on the Euler Characteristic Transform.
This loss can be used as an inductive bias to aid the optimization of any neural network toward better reconstructions in the regime of limited data.
We show the effectiveness of the proposed loss function by incorporating it into SHAPR, a state-of-the-art shape reconstruction model.
arXiv Detail & Related papers (2023-03-08T02:12:17Z)
- 3D Shape Reconstruction from 2D Images with Disentangled Attribute Flow [61.62796058294777]
Reconstructing 3D shape from a single 2D image is a challenging task.
Most previous methods still struggle to extract semantic attributes for the 3D reconstruction task.
We propose 3DAttriFlow to disentangle and extract semantic attributes through different semantic levels in the input images.
arXiv Detail & Related papers (2022-03-29T02:03:31Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image [102.44347847154867]
We propose a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image [27.885717341244014]
3D reconstruction from a single-view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct 3D models using the mesh representation.
Experimental results on images from ShapeNet show that our proposed STD-Net performs better than other state-of-the-art methods in reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.