Better Than Ground-truth? Beyond Supervised Learning for Photoacoustic
Imaging Reconstruction
- URL: http://arxiv.org/abs/2012.02472v2
- Date: Mon, 21 Dec 2020 01:54:11 GMT
- Title: Better Than Ground-truth? Beyond Supervised Learning for Photoacoustic
Imaging Reconstruction
- Authors: Hengrong Lan, Changchun Yang, Feng Gao, and Fei Gao
- Abstract summary: Photoacoustic computed tomography (PACT) reconstructs the initial pressure distribution from raw PA signals.
Recently, supervised deep learning has been used to overcome the limited-view problem, but it requires ground-truth.
We propose a beyond-supervised reconstruction framework (BSR-Net) based on deep learning to compensate for the limited-view issue.
- Score: 4.748104083612737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photoacoustic computed tomography (PACT) reconstructs the initial pressure
distribution from raw PA signals. Standard reconstruction from limited-view
signals always induces artifacts, which are caused by the limited angular
coverage of the transducers, their finite bandwidth, and uncertain heterogeneous
biological tissue. Recently, supervised deep learning has been used to overcome
the limited-view problem, but it requires ground-truth. However, even full-view
sampling still induces artifacts, so the resulting images cannot serve as clean
training targets. This creates a dilemma: perfect ground-truth cannot be
acquired in practice. To reduce the dependence on the quality of the
ground-truth, in this paper, for the first time, we propose a beyond-supervised
reconstruction framework (BSR-Net) based on deep learning that compensates for
the limited-view issue by feeding in limited-view, position-wise data: a quarter
of the position-wise data is fed into the model, which outputs a group of
full-view data. Specifically, our method introduces a residual structure that
generates a beyond-supervised reconstruction result whose artifacts are
drastically reduced compared with the ground-truth. Moreover, two novel losses
are designed to suppress the artifacts. Numerical and in-vivo results
demonstrate the ability of our method to reconstruct full-view images without
artifacts.
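The abstract names three ingredients: a limited-view, position-wise input (a quarter of the transducer positions), a residual structure whose output subtracts a predicted artifact component, and a two-term loss. A minimal Python sketch of these ideas follows; the network operates on reconstructed images for simplicity, and every layer size, loss term, weight, and tensor shape is an illustrative assumption rather than the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def quarter_view(sinogram: torch.Tensor, n_positions: int) -> torch.Tensor:
    """Keep a contiguous quarter of the transducer positions and zero the rest.

    `sinogram` is assumed to be (batch, 1, n_positions, n_samples); using a
    contiguous arc is an illustrative choice, not the paper's protocol.
    """
    limited = torch.zeros_like(sinogram)
    keep = n_positions // 4
    limited[:, :, :keep, :] = sinogram[:, :, :keep, :]
    return limited

class ToyResidualRecon(nn.Module):
    """Toy residual reconstruction net: output = input - predicted artifacts.

    A small convolutional stack stands in for the unspecified BSR-Net backbone.
    """
    def __init__(self, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, limited_view_img: torch.Tensor) -> torch.Tensor:
        artifacts = self.body(limited_view_img)  # residual branch: estimate the artifact component
        return limited_view_img - artifacts      # subtract it to form the output reconstruction

def two_term_loss(pred, full_view_ref, artifact_mask, w_fid=1.0, w_art=0.1):
    """Illustrative two-term objective: fidelity to the (imperfect) full-view
    reference plus a penalty on energy inside a hypothetical artifact region."""
    fidelity = F.l1_loss(pred, full_view_ref)
    artifact_penalty = (pred.abs() * artifact_mask).mean()
    return w_fid * fidelity + w_art * artifact_penalty
```

In this toy setup the artifact penalty acts directly on the output rather than on its agreement with the reference, which is one way an output can end up cleaner than an imperfect reference.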
Related papers
- Re-Visible Dual-Domain Self-Supervised Deep Unfolding Network for MRI Reconstruction [48.30341580103962]
We propose a novel re-visible dual-domain self-supervised deep unfolding network to address these issues.
We design a deep unfolding network based on Chambolle and Pock Proximal Point Algorithm (DUN-CP-PPA) to achieve end-to-end reconstruction.
Experiments conducted on the fastMRI and IXI datasets demonstrate that our method significantly outperforms state-of-the-art approaches in terms of reconstruction performance.
arXiv Detail & Related papers (2025-01-07T12:29:32Z) - Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection [58.87142367781417]
A naively trained detector tends to overfit to the limited and monotonous fake patterns, causing the feature space to become highly constrained and low-rank.
One potential remedy is incorporating the pre-trained knowledge within the vision foundation models to expand the feature space.
By freezing the principal components and adapting only the remaining components, we preserve the pre-trained knowledge while learning forgery-related patterns.
arXiv Detail & Related papers (2024-11-23T19:10:32Z) - SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z) - DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction [45.00528216648563]
Diffusion Prior Driven Neural Representation (DPER) is an unsupervised framework designed to address the exceptionally ill-posed CT reconstruction inverse problems.
DPER adopts the Half Quadratic Splitting (HQS) algorithm to decompose the inverse problem into data fidelity and distribution prior sub-problems (a generic HQS sketch follows this list).
We conduct comprehensive experiments to evaluate the performance of DPER on LACT and ultra-SVCT reconstruction with two public datasets.
arXiv Detail & Related papers (2024-04-27T12:55:13Z) - BFRFormer: Transformer-based generator for Real-World Blind Face
Restoration [37.77996097891398]
We propose a Transformer-based blind face restoration method, named BFRFormer, to reconstruct images with more identity-preserved details in an end-to-end manner.
Our method outperforms state-of-the-art methods on a synthetic dataset and four real-world datasets.
arXiv Detail & Related papers (2024-02-29T02:31:54Z) - Reconstruction Distortion of Learned Image Compression with
Imperceptible Perturbations [69.25683256447044]
We introduce an attack approach designed to effectively degrade the reconstruction quality of Learned Image Compression (LIC).
We generate adversarial examples by introducing a Frobenius norm-based loss function to maximize the discrepancy between original images and reconstructed adversarial examples.
Experiments conducted on the Kodak dataset using various LIC models demonstrate its effectiveness.
arXiv Detail & Related papers (2023-06-01T20:21:05Z) - Source-Free Domain Adaptation for Real-world Image Dehazing [10.26945164141663]
We present a novel Source-Free Unsupervised Domain Adaptation (SFUDA) image dehazing paradigm.
We devise the Domain Representation Normalization (DRN) module to make the representation of real hazy domain features match that of the synthetic domain.
With our plug-and-play DRN module, existing well-trained source networks can be adapted to unlabeled real hazy images.
arXiv Detail & Related papers (2022-07-14T03:37:25Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - Contrastive Feature Loss for Image Prediction [55.373404869092866]
Training supervised image synthesis models requires a critic to compare two images: the ground truth and the result.
We introduce an information theory based approach to measuring similarity between two images.
We show that our formulation boosts the perceptual realism of output images when used as a drop-in replacement for the L1 loss.
arXiv Detail & Related papers (2021-11-12T20:39:52Z) - 2-Step Sparse-View CT Reconstruction with a Domain-Specific Perceptual
Network [14.577323946585755]
We present a novel framework for sparse-view tomography by decoupling the reconstruction into two steps.
The intermediate result allows for a closed-form tomographic reconstruction with preserved details and highly reduced streak-artifacts.
Second, a refinement network, PRN, trained on the reconstructions reduces any remaining artifacts.
arXiv Detail & Related papers (2020-12-08T21:16:43Z) - Implicit Subspace Prior Learning for Dual-Blind Face Restoration [66.67059961379923]
A novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration.
Experimental results demonstrate significant perception-distortion improvement of ISPL against existing state-of-the-art methods.
arXiv Detail & Related papers (2020-10-12T08:04:24Z) - Limited View Tomographic Reconstruction Using a Deep Recurrent Framework
with Residual Dense Spatial-Channel Attention Network and Sinogram
Consistency [25.16002539710169]
We propose a novel recurrent reconstruction framework that stacks the same block multiple times.
We develop a sinogram consistency layer interleaved in our recurrent framework to ensure that the sampled sinogram is consistent with the sinogram of the intermediate outputs of the recurrent blocks.
Our algorithm achieves a consistent and significant improvement over the existing state-of-the-art neural methods on both limited angle reconstruction and sparse view reconstruction.
arXiv Detail & Related papers (2020-09-03T16:39:48Z) - Data Consistent CT Reconstruction from Insufficient Data with Learned
Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method, which combines the advantages of compressed sensing and deep learning, to improve image quality (see the sketch after this list).
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data, and sparse-view data.
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
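The data consistent reconstruction (DCR) entry above, and the sinogram consistency layer of the limited-view tomography entry, share one core operation: keep the network's prediction only where no measurement exists and re-impose the measured data elsewhere. A minimal sketch under that reading follows; the shapes, the binary mask, and the simple blending rule are assumptions, not the papers' exact formulations.

```python
import numpy as np

def enforce_data_consistency(pred_sinogram: np.ndarray,
                             measured_sinogram: np.ndarray,
                             measured_mask: np.ndarray) -> np.ndarray:
    """Replace predicted values with measurements wherever data were acquired.

    `measured_mask` is 1 for acquired views/detector bins and 0 elsewhere, so
    the network's prediction survives only in the unmeasured region.
    """
    return measured_mask * measured_sinogram + (1.0 - measured_mask) * pred_sinogram

# Example: 360 views x 256 detector bins, of which only the first 90 views were measured
pred = np.random.randn(360, 256)
meas = np.random.randn(360, 256)
mask = np.zeros((360, 256))
mask[:90, :] = 1.0
consistent = enforce_data_consistency(pred, meas, mask)
```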
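The DPER entry above describes Half Quadratic Splitting (HQS) as alternating between a data-fidelity sub-problem and a distribution-prior sub-problem. A generic plug-and-play HQS loop is sketched below; the gradient-descent x-step, the denoiser standing in for the learned prior, and all parameter values are assumptions, not DPER's actual scheme.

```python
import numpy as np

def hqs_reconstruct(A, At, y, denoiser, x0, mu=1.0, n_outer=20, n_inner=10, step=1e-2):
    """Generic HQS loop for min_x 0.5*||A x - y||^2 + R(x).

    A / At are the forward operator and its adjoint (callables), `denoiser`
    approximates the proximal operator of the prior R, and x0 is the initial image.
    """
    x = x0.copy()
    z = x0.copy()
    for _ in range(n_outer):
        # x-step (data fidelity): a few gradient steps on
        # 0.5*||A x - y||^2 + (mu/2)*||x - z||^2
        for _ in range(n_inner):
            grad = At(A(x) - y) + mu * (x - z)
            x = x - step * grad
        # z-step (distribution prior): proximal step approximated by a denoiser
        z = denoiser(x)
    return z
```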
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.