Clean Implicit 3D Structure from Noisy 2D STEM Images
- URL: http://arxiv.org/abs/2203.15434v1
- Date: Tue, 29 Mar 2022 11:00:28 GMT
- Title: Clean Implicit 3D Structure from Noisy 2D STEM Images
- Authors: Hannah Kniesel, Timo Ropinski, Tim Bergner, Kavitha Shaga Devan,
Clarissa Read, Paul Walther, Tobias Ritschel and Pedro Hermosilla
- Abstract summary: We show that a differentiable image formation model for STEM can learn a joint model of 2D sensor noise in STEM together with an implicit 3D model.
We show that the combination of these models is able to successfully disentangle 3D signal and noise without supervision, while outperforming several baselines on synthetic and real data.
- Score: 19.04251929587417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scanning Transmission Electron Microscopes (STEMs) acquire 2D images of a 3D
sample on the scale of individual cell components. Unfortunately, these 2D
images can be too noisy to be fused into a useful 3D structure and facilitating
good denoisers is challenging due to the lack of clean-noisy pairs.
Additionally, representing a detailed 3D structure can be difficult even for
clean data when using regular 3D grids. Addressing these two limitations, we
suggest a differentiable image formation model for STEM, which allows learning a
joint model of 2D sensor noise in STEM together with an implicit 3D model. We
show that the combination of these models is able to successfully disentangle
3D signal and noise without supervision, while outperforming several baselines
on synthetic and real data.
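The abstract's pipeline (an implicit 3D density field, a differentiable projection along the electron beam, and a sensor noise model on the resulting 2D image) can be illustrated with a minimal numpy sketch. Everything here is a stand-in, not the paper's actual method: the MLP weights are random rather than optimized, the rendering is a plain sum along the z axis, and Poisson shot noise substitutes for the learned STEM noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical implicit 3D model: a tiny random MLP mapping (x, y, z) -> density.
# In the paper's setting, such a field would be optimized end to end through
# the differentiable image formation model; here the weights are just random.
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def density(pts):
    """Evaluate the implicit density field at (N, 3) coordinates."""
    h = np.tanh(pts @ W1 + b1)
    return np.log1p(np.exp(h @ W2 + b2))  # softplus keeps density non-negative

def render_stem(h=16, w=16, n_depth=32):
    """Differentiable-style projection: integrate density along the beam (z) axis."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    img = np.zeros((h, w))
    for z in np.linspace(-1, 1, n_depth):
        pts = np.stack([xs.ravel(), ys.ravel(), np.full(h * w, z)], axis=1)
        img += density(pts).reshape(h, w)
    return img / n_depth  # clean 2D projection image

def add_sensor_noise(clean, dose=50.0):
    """Poisson shot noise as a placeholder for the learned 2D sensor noise model."""
    return rng.poisson(clean * dose) / dose

clean = render_stem()
noisy = add_sensor_noise(clean)
```

The point of the sketch is the factorization: because rendering is differentiable, gradients from noisy 2D observations can flow back into the 3D field and the noise parameters jointly, which is what lets signal and noise be disentangled without clean-noisy supervision.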
Related papers
- SYM3D: Learning Symmetric Triplanes for Better 3D-Awareness of GANs [5.84660008137615]
We propose SYM3D, a novel 3D-aware GAN designed to leverage the prevalent symmetry structure found in natural and man-made objects.
We evaluate SYM3D on both synthetic (ShapeNet Chairs, Cars, and Airplanes) and real-world datasets.
arXiv Detail & Related papers (2024-06-10T16:24:07Z)
- Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior [57.986512832738704]
We present a new framework Sculpt3D that equips the current pipeline with explicit injection of 3D priors from retrieved reference objects without re-training the 2D diffusion model.
Specifically, we demonstrate that high-quality and diverse 3D geometry can be guaranteed by keypoints supervision through a sparse ray sampling approach.
These two decoupled designs effectively harness 3D information from reference objects to generate 3D objects while preserving the generation quality of the 2D diffusion model.
arXiv Detail & Related papers (2024-03-14T07:39:59Z)
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- Likelihood-Based Generative Radiance Field with Latent Space Energy-Based Model for 3D-Aware Disentangled Image Representation [43.41596483002523]
We propose a likelihood-based top-down 3D-aware 2D image generative model that incorporates 3D representation via Neural Radiance Fields (NeRF) and 2D imaging process via differentiable volume rendering.
Experiments on several benchmark datasets demonstrate that the NeRF-LEBM can infer 3D object structures from 2D images, generate 2D images with novel views and objects, learn from incomplete 2D images, and learn from 2D images with known or unknown camera poses.
arXiv Detail & Related papers (2023-04-16T23:44:41Z)
- CC3D: Layout-Conditioned Generation of Compositional 3D Scenes [49.281006972028194]
We introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts.
Our evaluations on synthetic 3D-FRONT and real-world KITTI-360 datasets demonstrate that our model generates scenes of improved visual and geometric quality.
arXiv Detail & Related papers (2023-03-21T17:59:02Z)
- 3inGAN: Learning a 3D Generative Model from Images of a Self-similar Scene [34.2144933185175]
3inGAN is an unconditional 3D generative model trained from 2D images of a single self-similar 3D scene.
We show results on semi-stochastic scenes of varying scale and complexity, obtained from real and synthetic sources.
arXiv Detail & Related papers (2022-11-27T18:03:21Z)
- XDGAN: Multi-Modal 3D Shape Generation in 2D Space [60.46777591995821]
We propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space.
The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization and interactive editing.
We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single view reconstruction and shape manipulation, while being significantly faster and more flexible compared to recent 3D generative models.
arXiv Detail & Related papers (2022-10-06T15:54:01Z)
- Improving 3D-aware Image Synthesis with A Geometry-aware Discriminator [68.0533826852601]
3D-aware image synthesis aims at learning a generative model that can render photo-realistic 2D images while capturing decent underlying 3D shapes.
Existing methods often fail to recover even moderately accurate 3D shapes.
We propose a geometry-aware discriminator to improve 3D-aware GANs.
arXiv Detail & Related papers (2022-09-30T17:59:37Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that encodes the self-occlusions of 3D foreground objects into a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- Three-dimensional Microstructural Image Synthesis from 2D Backscattered Electron Image of Cement Paste [10.632881687161762]
A framework (CEM3DMG) is designed to synthesize 3D images by learning microstructural information from a 2D backscattered electron (BSE) image.
Visual observation confirms that the generated 3D images exhibit microstructural features similar to the 2D images, including pore and particle morphology.
arXiv Detail & Related papers (2022-04-04T16:50:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.