Integrating Generative and Physics-Based Models for Ptychographic Imaging with Uncertainty Quantification
- URL: http://arxiv.org/abs/2412.10882v1
- Date: Sat, 14 Dec 2024 16:16:37 GMT
- Title: Integrating Generative and Physics-Based Models for Ptychographic Imaging with Uncertainty Quantification
- Authors: Canberk Ekmekci, Tekin Bicer, Zichao Wendy Di, Junjing Deng, Mujdat Cetin
- Abstract summary: Ptychography is a scanning coherent diffractive imaging technique that enables imaging nanometer-scale features in extended samples. This paper proposes a Bayesian inversion method for ptychography that performs effectively even with less overlap between neighboring scan locations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ptychography is a scanning coherent diffractive imaging technique that enables imaging nanometer-scale features in extended samples. One main challenge is that widely used iterative image reconstruction methods often require a significant amount of overlap between adjacent scan locations, leading to large data volumes and prolonged acquisition times. To address this key limitation, this paper proposes a Bayesian inversion method for ptychography that performs effectively even with less overlap between neighboring scan locations. Furthermore, the proposed method can quantify the inherent uncertainty in the ptychographic object, which arises from the ill-posed nature of the ptychographic inverse problem. At a high level, the proposed method first utilizes a deep generative model to learn the prior distribution of the object and then generates samples from the posterior distribution of the object by using a Markov Chain Monte Carlo algorithm. Our results from simulated ptychography experiments show that the proposed framework can consistently outperform a widely used iterative reconstruction algorithm in cases of reduced overlap. Moreover, the proposed framework can provide uncertainty estimates that closely correlate with the true error, which is not available in practice. The project website is available here.
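The pipeline described in the abstract (generative prior over the object, then MCMC sampling from the posterior) can be sketched in miniature. The following is a minimal toy illustration, not the paper's actual method: the "pretrained generator" and the ptychographic forward operator are replaced by random linear maps, and the unadjusted Langevin algorithm stands in for whichever MCMC sampler the authors use. The per-pixel standard deviation over posterior samples plays the role of the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's actual models):
# - generator G: latent z -> object (here a fixed linear map)
# - forward operator A: object -> measurements (here a random projection)
d_latent, d_object, d_meas = 8, 32, 16
G = rng.normal(size=(d_object, d_latent))            # "pretrained" generator
A = rng.normal(size=(d_meas, d_object)) / np.sqrt(d_object)
sigma = 0.1                                          # measurement noise std

z_true = rng.normal(size=d_latent)
y = A @ G @ z_true + sigma * rng.normal(size=d_meas)

def grad_log_posterior(z):
    """Gradient of log p(z | y) for a Gaussian likelihood and N(0, I) latent prior."""
    residual = y - A @ G @ z
    return (G.T @ A.T @ residual) / sigma**2 - z

# Unadjusted Langevin algorithm:
#   z_{k+1} = z_k + eps * grad + sqrt(2 * eps) * noise
eps, n_steps, n_burn = 1e-4, 5000, 2000
z = np.zeros(d_latent)
samples = []
for k in range(n_steps):
    z = z + eps * grad_log_posterior(z) + np.sqrt(2 * eps) * rng.normal(size=d_latent)
    if k >= n_burn:
        samples.append(G @ z)                        # posterior samples of the object

samples = np.array(samples)
posterior_mean = samples.mean(axis=0)                # reconstruction
posterior_std = samples.std(axis=0)                  # per-element uncertainty
```

In the linear-Gaussian toy case the posterior is available in closed form, so Langevin sampling is overkill; the point of the sketch is only the structure shared with the paper: sample in the latent space of a generative prior, push samples through the generator, and read uncertainty off the sample spread.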
Related papers
- PtychoFormer: A Transformer-based Model for Ptychographic Phase Retrieval [9.425754476649796]
We present a hierarchical transformer-based model for data-driven single-shot ptychographic phase retrieval.
Our model exhibits tolerance to sparsely scanned diffraction patterns and achieves up to 3600 times faster imaging speed than the extended ptychographic iterative engine (ePIE).
arXiv Detail & Related papers (2024-10-22T19:26:05Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - PtychoDV: Vision Transformer-Based Deep Unrolling Network for Ptychographic Image Reconstruction [12.780951605821238]
PtychoDV is a novel deep model-based network designed for efficient, high-quality ptychographic image reconstruction.
Results on simulated data demonstrate that PtychoDV is capable of outperforming existing deep learning methods for this problem.
arXiv Detail & Related papers (2023-10-11T14:01:36Z) - Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring [48.80983873199214]
We develop a data-driven approach to model the saturated pixels by a learned latent map.
Based on the new model, the non-blind deblurring task can be formulated into a maximum a posterior (MAP) problem.
To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network.
arXiv Detail & Related papers (2023-08-10T12:53:30Z) - A Deep Generative Approach to Oversampling in Ptychography [9.658250977094562]
A major drawback of ptychography is the long data acquisition time.
We propose complementing sparsely acquired or undersampled data with data sampled from a deep generative network.
Because the deep generative network is pre-trained and its output can be computed as data are collected, both the amount of experimental data and the time needed to acquire it can be reduced.
arXiv Detail & Related papers (2022-07-28T22:02:01Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Compressive Ptychography using Deep Image and Generative Priors [9.658250977094562]
Ptychography is a well-established coherent diffraction imaging technique that enables non-invasive imaging of samples at a nanometer scale.
One major limitation of ptychography is the long data acquisition time due to mechanical scanning of the sample.
We propose a generative model combining deep image priors with deep generative priors.
arXiv Detail & Related papers (2022-05-05T02:18:26Z) - Mining the manifolds of deep generative models for multiple data-consistent solutions of ill-posed tomographic imaging problems [10.115302976900445]
Tomographic imaging is in general an ill-posed inverse problem.
We propose a new empirical sampling method that computes multiple solutions of a tomographic inverse problem.
arXiv Detail & Related papers (2022-02-10T20:27:31Z) - Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
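The unrolling idea summarized here can be sketched concretely. Below is a minimal, non-learned stand-in: plain iterative shrinkage-thresholding (ISTA) unrolled for a fixed number of layers on a toy sparse-recovery problem, where the per-layer threshold values are the quantities that a LISTA-style network would learn (here they are simply hand-set). The measurement model is an assumed random matrix, not the photothermal operator from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse recovery problem y = A x + noise (a stand-in for the
# photothermal super-resolution measurement model).
m, n, k_sparse = 20, 50, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k_sparse, replace=False)] = rng.normal(size=k_sparse)
y = A @ x_true + 0.01 * rng.normal(size=m)

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm (the 'shrinkage' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Unrolled ISTA: a fixed number of layers, each with its own threshold
# theta_k. In a learned (LISTA-style) network these thresholds -- and
# optionally the matrices -- would be trained end to end; here they are
# hand-set to a decaying schedule.
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
step = 1.0 / L
thetas = np.geomspace(0.1, 0.001, 16)  # per-layer "learned" thresholds

x = np.zeros(n)
for theta in thetas:
    x = soft_threshold(x + step * A.T @ (y - A @ x), theta)
```

The paper's block-sparse variant replaces the elementwise shrinkage with a shrinkage applied jointly to blocks of coefficients; the unrolled structure is otherwise the same.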
arXiv Detail & Related papers (2020-12-07T09:27:16Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.