Generating Topologically and Geometrically Diverse Manifold Data in Dimensions Four and Below
- URL: http://arxiv.org/abs/2410.07115v1
- Date: Fri, 20 Sep 2024 09:37:09 GMT
- Title: Generating Topologically and Geometrically Diverse Manifold Data in Dimensions Four and Below
- Authors: Khalil Mathieu Hannouch, Stephan Chalup
- Abstract summary: Recent work has demonstrated that synthetic 4D image-type data can be useful to train 4D convolutional neural network models.
These models appear to tolerate the use of image preprocessing techniques where existing topological data analysis techniques such as persistent homology do not.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding the topological characteristics of data is important to many areas of research. Recent work has demonstrated that synthetic 4D image-type data can be useful to train 4D convolutional neural network models to see topological features in these data. These models also appear to tolerate the use of image preprocessing techniques where existing topological data analysis techniques such as persistent homology do not. This paper investigates how methods from algebraic topology, combined with image processing techniques such as morphology, can be used to generate topologically sophisticated and diverse-looking 2-, 3-, and 4D image-type data with topological labels in simulation. These approaches are illustrated in 2D and 3D with the aim of providing a roadmap towards achieving this in 4D.
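As a toy illustration of the kind of labeled data the abstract describes (our own minimal sketch, not the authors' pipeline), the 2D case can be built with plain NumPy/SciPy: draw solid disks, subtract a smaller disk to punch a hole, and read the topological labels off connected-component counts — Betti-0 from the foreground, Betti-1 from the enclosed background regions:

```python
import numpy as np
from scipy import ndimage

def disk_mask(shape, center, radius):
    """Boolean mask of a filled disk."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def sample_with_labels(shape=(64, 64)):
    """Generate one binary 2D image together with its Betti numbers."""
    # Foreground: two solid disks; punch a hole in the first one.
    img = disk_mask(shape, (20, 20), 12) | disk_mask(shape, (44, 44), 10)
    img &= ~disk_mask(shape, (20, 20), 5)  # the hole adds one 1-cycle
    # Betti-0: connected components of the foreground (8-connectivity).
    b0 = ndimage.label(img, structure=np.ones((3, 3)))[1]
    # Betti-1: enclosed background components; the unbounded outside
    # counts as one 4-connected background component, so subtract it.
    b1 = ndimage.label(~img)[1] - 1
    return img, b0, b1
```

Randomizing the number, position, and size of disks and holes (with overlap checks) yields the kind of labeled training set the paper motivates; morphological operations can then diversify the shapes without changing the labels.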
Related papers
- Persistence Image from 3D Medical Image: Superpixel and Optimized Gaussian Coefficient [3.808587330262038]
Topological data analysis (TDA) uncovers crucial properties of objects in medical imaging.
Previous research primarily focused on 2D image analysis, neglecting the comprehensive 3D context.
We propose an innovative 3D TDA approach that incorporates the concept of superpixels to transform 3D medical image features into point cloud data.
arXiv Detail & Related papers (2024-08-15T03:24:00Z)
- Using convolutional neural networks for stereological characterization of 3D hetero-aggregates based on synthetic STEM data [0.0]
A parametric 3D model is presented, from which a wide spectrum of virtual hetero-aggregates can be generated.
The virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images.
Convolutional neural networks are trained to predict 3D structures of hetero-aggregates from 2D STEM images.
arXiv Detail & Related papers (2023-10-27T22:49:08Z)
- Synthetic Data Generation and Deep Learning for the Topological Analysis of 3D Data [0.0]
This research uses deep learning to estimate the topology of sparse, unordered point cloud scenes in 3D.
The experimental results of this pilot study support the hypothesis that, with the aid of sophisticated synthetic data generation, neural networks can perform segmentation-based topological data analysis.
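A classical, non-learned baseline for one part of this task can be sketched with SciPy (our own illustration, not the paper's method): the number of connected components (Betti-0) of a point cloud at a fixed scale ε is read off the ε-neighborhood graph with union-find:

```python
import numpy as np
from scipy.spatial import cKDTree

def betti0_at_scale(points, eps):
    """Connected components (Betti-0) of the epsilon-neighborhood graph:
    join every pair of points closer than eps, then count components."""
    tree = cKDTree(points)
    parent = list(range(len(points)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in tree.query_pairs(r=eps):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(i) for i in range(len(points))})
```

The scale ε is the usual weakness of this baseline — it must be tuned per scene — which is part of what motivates learning the topology directly from the point cloud.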
arXiv Detail & Related papers (2023-09-29T04:37:35Z)
- Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects [52.46838926521572]
3D-aware generative models have demonstrated their superb performance to generate 3D neural radiance fields (NeRF) from a collection of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
arXiv Detail & Related papers (2022-09-09T08:44:06Z)
- LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling [69.56581851211841]
We propose a novel Local 4D implicit Representation for Dynamic clothed human, named LoRD.
Our key insight is to encourage the network to learn the latent codes of local part-level representation.
LoRD has strong capability for representing 4D human, and outperforms state-of-the-art methods on practical applications.
arXiv Detail & Related papers (2022-08-18T03:49:44Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction [7.323706635751351]
We propose to complement geometrical shape information by including multi-scale topological features, such as connected components, cycles, and voids, in the reconstruction loss.
Our method calculates topological features from 3D volumetric data based on cubical complexes and uses an optimal transport distance to guide the reconstruction process.
We demonstrate the utility of our loss by incorporating it into SHAPR, a model for predicting the 3D cell shape of individual cells based on 2D microscopy images.
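The cubical complexes mentioned in this entry admit a very cheap topological summary that can be sketched in pure NumPy (our own illustration — the SHAPR loss itself uses persistent homology and optimal transport, not this): the Euler characteristic of a binary volume is the alternating count of the cells of the cubical complex spanned by its occupied voxels:

```python
import numpy as np

def euler_characteristic_3d(vol):
    """chi = #vertices - #edges + #faces - #cubes for the cubical complex
    of a binary volume; a cell is present if any incident voxel is occupied."""
    P = np.pad(np.asarray(vol, dtype=bool), 1)
    # Vertices: OR over the 2x2x2 block of voxels incident to each lattice point.
    V = (P[:-1, :-1, :-1] | P[:-1, :-1, 1:] | P[:-1, 1:, :-1] | P[:-1, 1:, 1:]
         | P[1:, :-1, :-1] | P[1:, :-1, 1:] | P[1:, 1:, :-1] | P[1:, 1:, 1:]).sum()
    # Edges along each axis: OR over the 2x2 block of voxels sharing the edge.
    E = ((P[:, :-1, :-1] | P[:, :-1, 1:] | P[:, 1:, :-1] | P[:, 1:, 1:]).sum()
         + (P[:-1, :, :-1] | P[:-1, :, 1:] | P[1:, :, :-1] | P[1:, :, 1:]).sum()
         + (P[:-1, :-1, :] | P[:-1, 1:, :] | P[1:, :-1, :] | P[1:, 1:, :]).sum())
    # Faces: OR over the two voxels sharing each face.
    F = ((P[:-1] | P[1:]).sum()
         + (P[:, :-1] | P[:, 1:]).sum()
         + (P[:, :, :-1] | P[:, :, 1:]).sum())
    C = P.sum()  # cubes = occupied voxels (padding is all zero)
    return int(V - E + F - C)
```

A solid block gives χ = 1 (a ball), hollowing out its interior gives χ = 2 (a sphere), and a voxel ring gives χ = 0 (a solid torus) — so this single integer already separates several of the topological classes such losses care about, though unlike persistence it carries no scale information.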
arXiv Detail & Related papers (2022-03-03T13:18:21Z)
- Predictive Geological Mapping with Convolution Neural Network Using Statistical Data Augmentation on a 3D Model [0.0]
We develop a data augmentation workflow that uses a 3D geological and magnetic susceptibility model as input.
A Gated Shape Convolutional Neural Network algorithm was trained on a generated synthetic dataset to perform geological mapping.
The validation conducted on a portion of the synthetic dataset and data from adjacent areas shows that the methodology is suitable to segment the surficial geology.
arXiv Detail & Related papers (2021-10-27T13:56:40Z)
- Magnifying Subtle Facial Motions for Effective 4D Expression Recognition [56.806738404887824]
The flow of 3D faces is first analyzed to capture the spatial deformations.
The obtained temporal evolution of these deformations is fed into a magnification method.
The latter, the main contribution of this paper, reveals subtle (hidden) deformations that enhance the emotion classification performance.
arXiv Detail & Related papers (2021-05-05T20:47:43Z)
- Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
We in addition obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We adapt a primal-dual framework from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.