Euler Characteristic Transform Based Topological Loss for Reconstructing 3D Images from Single 2D Slices
- URL: http://arxiv.org/abs/2303.05286v1
- Date: Wed, 8 Mar 2023 02:12:17 GMT
- Title: Euler Characteristic Transform Based Topological Loss for Reconstructing 3D Images from Single 2D Slices
- Authors: Kalyan Varma Nadimpalli, Amit Chattopadhyay and Bastian Rieck
- Abstract summary: We propose a novel topological loss function based on the Euler Characteristic Transform.
This loss can be used as an inductive bias to aid the optimization of any neural network toward better reconstructions in the regime of limited data.
We show the effectiveness of the proposed loss function by incorporating it into SHAPR, a state-of-the-art shape reconstruction model.
- Score: 9.646922337783137
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The computer vision task of reconstructing 3D images, i.e., shapes, from
their single 2D image slices is extremely challenging, more so in the regime of
limited data. Deep learning models typically optimize geometric loss functions,
which may lead to poor reconstructions as they ignore the structural properties
of the shape. To tackle this, we propose a novel topological loss function
based on the Euler Characteristic Transform. This loss can be used as an
inductive bias to aid the optimization of any neural network toward better
reconstructions in the regime of limited data. We show the effectiveness of the
proposed loss function by incorporating it into SHAPR, a state-of-the-art shape
reconstruction model, and test it on two benchmark datasets, viz., Red Blood
Cells and Nuclei datasets. We also show a favourable property, namely
injectivity, and discuss the stability of the topological loss function based
on the Euler Characteristic Transform.
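The quantity behind the proposed loss can be illustrated on binary voxel volumes. The sketch below is a minimal, non-differentiable illustration, not the paper's implementation (the actual loss must be formulated so that it can guide gradient-based optimization, and all function names here are assumptions): it computes the Euler characteristic chi = V - E + F - C of the cubical complex spanned by the occupied voxels, discretises an Euler characteristic curve along an axis-aligned filtration direction, and compares two volumes by the L1 distance between their curves.

```python
import itertools
import numpy as np

def euler_characteristic(vol):
    """Euler characteristic chi = V - E + F - C of the cubical complex
    spanned by the occupied voxels of a binary 3D volume."""
    verts, edges, faces = set(), set(), set()
    cubes = 0
    for x, y, z in np.argwhere(np.asarray(vol) > 0):
        cubes += 1
        corners = [(x + dx, y + dy, z + dz)
                   for dx, dy, dz in itertools.product((0, 1), repeat=3)]
        verts.update(corners)
        for a, b in itertools.combinations(corners, 2):
            if sum(abs(a[i] - b[i]) for i in range(3)) == 1:
                edges.add(frozenset((a, b)))   # the 12 edges of the cube
        for axis in range(3):                  # the 6 faces of the cube
            for side in (0, 1):
                fixed = (x, y, z)[axis] + side
                faces.add(tuple(sorted(c for c in corners if c[axis] == fixed)))
    return len(verts) - len(edges) + len(faces) - cubes

def euler_curve(vol, axis=0):
    """Euler characteristics of growing sublevel sets along one axis:
    a discretised slice of the Euler Characteristic Transform."""
    vol = np.asarray(vol)
    return np.array([euler_characteristic(np.take(vol, range(k + 1), axis=axis))
                     for k in range(vol.shape[axis])])

def ect_loss(pred, target, axes=(0, 1, 2)):
    """L1 distance between the Euler characteristic curves of two volumes."""
    return float(sum(np.abs(euler_curve(pred, a) - euler_curve(target, a)).sum()
                     for a in axes))
```

The injectivity property shown in the paper means that, with enough filtration directions, such curves determine the shape uniquely; the three axis-aligned directions used here are only a coarse approximation of that.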
Related papers
- T-Pixel2Mesh: Combining Global and Local Transformer for 3D Mesh Generation from a Single Image [84.08705684778666]
We propose a novel Transformer-boosted architecture, named T-Pixel2Mesh, inspired by the coarse-to-fine approach of P2M.
Specifically, we use a global Transformer to control the holistic shape and a local Transformer to refine the local geometry details.
Our experiments on ShapeNet demonstrate state-of-the-art performance, while results on real-world data show the generalization capability.
arXiv Detail & Related papers (2024-03-20T15:14:22Z)
- Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, the neural Poisson surface reconstruction not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z)
- LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction [5.107705550575662]
LIST is a novel neural architecture that leverages local and global image features to reconstruct the geometric and topological structure of a 3D object from a single image.
We show the superiority of our model in reconstructing 3D objects from both synthetic and real-world images against the state of the art.
arXiv Detail & Related papers (2023-07-23T01:01:27Z)
- Neural Textured Deformable Meshes for Robust Analysis-by-Synthesis [17.920305227880245]
Our paper formulates triple vision tasks in a consistent manner using approximate analysis-by-synthesis.
We show that our analysis-by-synthesis is much more robust than conventional neural networks when evaluated on real-world images.
arXiv Detail & Related papers (2023-05-31T18:45:02Z)
- Curvature regularization for Non-line-of-sight Imaging from Under-sampled Data [5.591221518341613]
Non-line-of-sight (NLOS) imaging aims to reconstruct the three-dimensional hidden scenes from the data measured in the line-of-sight.
We propose novel NLOS reconstruction models based on curvature regularization.
We evaluate the proposed algorithms on both synthetic and real datasets.
arXiv Detail & Related papers (2023-01-01T14:10:43Z)
- Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction [7.323706635751351]
We propose to complement geometrical shape information by including multi-scale topological features, such as connected components, cycles, and voids, in the reconstruction loss.
Our method calculates topological features from 3D volumetric data based on cubical complexes and uses an optimal transport distance to guide the reconstruction process.
We demonstrate the utility of our loss by incorporating it into SHAPR, a model for predicting the 3D cell shape of individual cells based on 2D microscopy images.
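As a rough, hypothetical sketch of what matching multi-scale connectivity information can look like, the snippet below compares only the simplest such feature, the number of connected components (the 0-th Betti number) of sublevel sets of two volumes, via `scipy.ndimage.label`; the method summarised above instead builds full cubical-complex persistence information (components, cycles, voids) and compares it with an optimal transport distance. Function names are illustrative, not from that paper's code.

```python
import numpy as np
from scipy import ndimage

def betti0_curve(vol, thresholds):
    """Connected-component counts of the sublevel sets {vol <= t}."""
    vol = np.asarray(vol)
    return np.array([ndimage.label(vol <= t)[1] for t in thresholds])

def topo_mismatch(pred, target, thresholds=np.linspace(0.0, 1.0, 11)):
    """L1 mismatch between the beta_0 curves of two volumes -- a crude
    stand-in for an optimal-transport comparison of persistence diagrams."""
    return int(np.abs(betti0_curve(pred, thresholds)
                      - betti0_curve(target, thresholds)).sum())
```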
arXiv Detail & Related papers (2022-03-03T13:18:21Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks [118.20778308823779]
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.