Res-U2Net: Untrained Deep Learning for Phase Retrieval and Image Reconstruction
- URL: http://arxiv.org/abs/2404.06657v1
- Date: Tue, 9 Apr 2024 23:47:53 GMT
- Title: Res-U2Net: Untrained Deep Learning for Phase Retrieval and Image Reconstruction
- Authors: Carlos Osorio Quero, Daniel Leykam, Irving Rondon Ojeda
- Abstract summary: We present a novel untrained Res-U2Net model for phase retrieval.
We use the extracted phase information to determine changes in an object's surface and generate a mesh representation of its 3D structure.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Conventional deep learning-based image reconstruction methods require a large amount of training data which can be hard to obtain in practice. Untrained deep learning methods overcome this limitation by training a network to invert a physical model of the image formation process. Here we present a novel untrained Res-U2Net model for phase retrieval. We use the extracted phase information to determine changes in an object's surface and generate a mesh representation of its 3D structure. We compare the performance of Res-U2Net phase retrieval against UNet and U2Net using images from the GDXRAY dataset.
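The inversion loop at the heart of untrained methods is easiest to see against the classical baseline they improve on. Below is a minimal NumPy sketch of error-reduction (Gerchberg-Saxton-style) phase retrieval, not the Res-U2Net code: it recovers a nonnegative image from its Fourier magnitudes by alternating projections between the measurement constraint and an object-domain constraint.

```python
import numpy as np

def error_reduction(measured_mag, shape, iters=300, seed=0):
    """Classical error-reduction phase retrieval: alternate between
    enforcing the measured Fourier magnitudes and a real, nonnegative
    constraint on the object-domain estimate."""
    rng = np.random.default_rng(seed)
    x = rng.random(shape)
    for _ in range(iters):
        X = np.fft.fft2(x)
        # keep the current phase estimate, impose the measured magnitudes
        X = measured_mag * np.exp(1j * np.angle(X))
        x = np.fft.ifft2(X).real
        x = np.clip(x, 0.0, None)  # object-domain projection
    return x

# Toy demo: a bright rectangle on a dark background
true = np.zeros((32, 32))
true[10:20, 12:22] = 1.0
mag = np.abs(np.fft.fft2(true))
rec = error_reduction(mag, true.shape)
residual = np.linalg.norm(np.abs(np.fft.fft2(rec)) - mag) / np.linalg.norm(mag)
```

An untrained network replaces the direct pixel estimate `x` with the output of a network whose weights are optimized so that the forward model applied to the network output matches the measurements; the network architecture then acts as an implicit prior on the reconstruction.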
Related papers
- Swap-Net: A Memory-Efficient 2.5D Network for Sparse-View 3D Cone Beam CT Reconstruction [13.891441371598546]
Reconstructing 3D cone beam computed tomography (CBCT) images from a limited set of projections is an inverse problem in many imaging applications.
This paper proposes Swap-Net, a memory-efficient 2.5D network for sparse-view 3D CBCT image reconstruction.
arXiv Detail & Related papers (2024-09-29T08:36:34Z)
- SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction [2.2954246824369218]
3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis.
We propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues.
arXiv Detail & Related papers (2023-09-06T19:30:22Z)
- MPT: Mesh Pre-Training with Transformers for Human Pose and Mesh Reconstruction [56.80384196339199]
Mesh Pre-Training (MPT) is a new pre-training framework that leverages 3D mesh data such as MoCap data for human pose and mesh reconstruction from a single image.
MPT enables transformer models to have zero-shot capability of human mesh reconstruction from real images.
arXiv Detail & Related papers (2022-11-24T00:02:13Z)
- DH-GAN: A Physics-driven Untrained Generative Adversarial Network for 3D Microscopic Imaging using Digital Holography [3.4635026053111484]
Digital holography is a 3D imaging technique in which a laser beam with a plane wavefront illuminates an object and the intensity of the diffracted wavefront, called a hologram, is measured.
Recently, deep learning (DL) methods have been used for more accurate holographic processing.
We propose a new DL architecture based on generative adversarial networks that uses a discriminative network for realizing a semantic measure for reconstruction quality.
arXiv Detail & Related papers (2022-05-25T17:13:45Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To deal with the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
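The deep image prior fits a randomly initialized network to the degraded measurement alone, relying on the network's structural bias plus early stopping as the regularizer. Below is a toy NumPy stand-in (a two-layer MLP with manual gradients instead of the CNN used in practice; the sizes, learning rate, and iteration count are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Degraded observation: a smooth 1-D signal plus noise
t = np.linspace(0.0, 1.0, 64)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(64)

# Tiny untrained "network": fixed random input z -> ReLU MLP -> signal
z = rng.standard_normal(16)
W1 = 0.1 * rng.standard_normal((32, 16)); b1 = np.zeros(32)
W2 = 0.1 * rng.standard_normal((64, 32)); b2 = np.zeros(64)

def forward():
    a = W1 @ z + b1
    h = np.maximum(a, 0.0)                # ReLU
    return a, h, W2 @ h + b2

losses = []
lr = 0.05
for _ in range(500):                      # stopping early acts as the regularizer
    a, h, out = forward()
    g = 2.0 * (out - noisy) / out.size    # d(mean squared error)/d(out)
    dW2 = np.outer(g, h); db2 = g
    dh = W2.T @ g
    da = dh * (a > 0)                     # ReLU gradient
    dW1 = np.outer(da, z); db1 = da
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    losses.append(np.mean((out - noisy) ** 2))

_, _, fitted = forward()
```

The slowness the paper targets is visible even in this toy: the reconstruction requires hundreds of gradient steps at test time for a single signal, whereas a trained network would need only one forward pass.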
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Shape-Texture Debiased Neural Network Training [50.6178024087048]
Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset.
We develop an algorithm for shape-texture debiased learning.
Experiments show that our method successfully improves model performance on several image recognition benchmarks.
arXiv Detail & Related papers (2020-10-12T19:16:12Z)
- Deep Artifact-Free Residual Network for Single Image Super-Resolution [0.2399911126932526]
We propose the Deep Artifact-Free Residual (DAFR) network, which combines the merits of residual learning with the use of the ground-truth image as the training target.
Our framework uses a deep model to extract the high-frequency information which is necessary for high-quality image reconstruction.
Our experimental results show that the proposed method achieves better quantitative and qualitative image quality compared to the existing methods.
arXiv Detail & Related papers (2020-09-25T20:53:55Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
- BP-DIP: A Backprojection based Deep Image Prior [49.375539602228415]
We combine two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch at test time using only the degraded image; and (ii) a backprojection (BP) fidelity term, an alternative to the standard least-squares loss used in previous DIP works.
We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and better inference run-time.
arXiv Detail & Related papers (2020-03-11T17:09:12Z)
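The backprojection fidelity idea can be sketched independently of the network: instead of penalizing the raw residual ||Ax - y||^2, it penalizes the residual mapped back to the image domain through the pseudoinverse, ||A^+(Ax - y)||^2. A small NumPy illustration (the degradation matrix and sizes below are assumptions for the demo, not from the paper):

```python
import numpy as np

def ls_loss(A, x, y):
    """Standard least-squares fidelity ||Ax - y||^2."""
    r = A @ x - y
    return float(r @ r)

def bp_loss(A, x, y, A_pinv=None):
    """Backprojection fidelity ||A^+(Ax - y)||^2: the measurement residual
    is mapped back through the pseudoinverse of A before being penalized,
    which reweights the residual's components by the inverse of A's
    singular values."""
    if A_pinv is None:
        A_pinv = np.linalg.pinv(A)
    r = A_pinv @ (A @ x - y)
    return float(r @ r)

# Toy degradation: a random wide matrix (downsampling-like) and clean signal
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = rng.standard_normal(40)
y = A @ x_true
```

For an orthogonal A the two losses coincide; they differ exactly when A is ill-conditioned, which is where the BP term reportedly helps DIP-style optimization.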
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.