Deep learning based dictionary learning and tomographic image
reconstruction
- URL: http://arxiv.org/abs/2108.11730v1
- Date: Thu, 26 Aug 2021 12:10:17 GMT
- Title: Deep learning based dictionary learning and tomographic image
reconstruction
- Authors: Jevgenija Rudzusika, Thomas Koehler, Ozan Öktem
- Abstract summary: This work presents an approach for image reconstruction in clinical low-dose tomography that combines principles from sparse signal processing with ideas from deep learning.
First, we describe sparse signal representation in terms of dictionaries from a statistical perspective and interpret dictionary learning as a process of aligning the distribution that arises from a generative model with the empirical distribution of true signals.
As a result, we can see that sparse coding with learned dictionaries resembles a specific variational autoencoder, where the decoder is a linear function and the encoder is a sparse coding algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents an approach for image reconstruction in clinical low-dose
tomography that combines principles from sparse signal processing with ideas
from deep learning. First, we describe sparse signal representation in terms of
dictionaries from a statistical perspective and interpret dictionary learning
as a process of aligning the distribution that arises from a generative model
with the empirical distribution of true signals. As a result, we can see that
sparse coding with learned dictionaries resembles a specific variational
autoencoder, where the decoder is a linear function and the encoder is a sparse
coding algorithm. Next, we show that dictionary learning can also benefit from
computational advancements introduced in the context of deep learning, such as
parallelism and stochastic optimization. Finally, we show that regularization
by dictionaries achieves performance competitive with state-of-the-art
model-based and data-driven approaches in computed tomography (CT)
reconstruction.
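The abstract's view of sparse coding with a learned dictionary as an autoencoder with a linear decoder, trained by stochastic optimization, can be sketched in a few lines of numpy. This is an illustrative sketch only, not the paper's implementation; the function names, the ISTA encoder, and all parameter choices are hypothetical.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=50):
    """The 'encoder': solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - D.T @ (D @ x - y) / L          # gradient step on the quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft-thresholding
    return x

def dictionary_learning(Y, n_atoms, lam=0.1, lr=0.1, n_epochs=20, seed=0):
    """Alternate sparse coding with a stochastic gradient step on the
    dictionary D (the linear 'decoder'), one training signal at a time."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    for _ in range(n_epochs):
        for y in rng.permutation(Y.T):         # stochastic pass over the signals
            x = ista(D, y, lam)
            D += lr * np.outer(y - D @ x, x)   # SGD step on 0.5*||y - D x||^2
            D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D
```

Replacing the per-signal loop with minibatches and running the ISTA encodings in parallel is what the "parallelism and stochastic optimization" remark refers to.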
Related papers
- Scalable Learning of Latent Language Structure With Logical Offline
Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as a structure prior and reveal the underlying signal interdependencies.
Deep unrolling and deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based image processing applications such as image denoising and inpainting.
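The MOD update mentioned in this entry fits the dictionary to fixed sparse codes by solving a least-squares problem in closed form. A minimal numpy sketch of one such update (illustrative only; it assumes the code matrix X has already been produced by some sparse coding step, and the function name is hypothetical):

```python
import numpy as np

def mod_update(Y, X, eps=1e-8):
    """MOD (Method of Optimal Directions) dictionary update:
    D = argmin_D ||Y - D X||_F^2 = Y X^T (X X^T)^{-1}, with atoms renormalized."""
    G = X @ X.T + eps * np.eye(X.shape[0])   # small ridge term for invertibility
    D = Y @ X.T @ np.linalg.inv(G)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)
```

K-SVD differs in that it updates one atom at a time (together with its coefficients) via a rank-one SVD rather than solving for all atoms jointly.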
arXiv Detail & Related papers (2021-11-17T10:45:10Z)
- PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation [4.081440927534577]
This paper offers the first theoretical proof for empirical results through PUDLE, a Provable Unfolded Dictionary LEarning method.
We highlight the minimization impact of loss, unfolding, and backpropagation on convergence.
We complement our findings through synthetic and image denoising experiments.
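The idea behind unfolded dictionary learning, as in this entry, is to fix the number of sparse-coding iterations so the whole encoder becomes a differentiable function of the dictionary, then train the dictionary by backpropagation. The rough numpy sketch below is not the paper's method: it emulates autodiff with central finite differences purely for illustration, and all names and settings are hypothetical.

```python
import numpy as np

def unrolled_ista(D, y, lam=0.1, n_steps=15):
    """Forward pass: ISTA unfolded for a fixed number of steps, making the
    encoder a (piecewise) differentiable function of the dictionary D."""
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_steps):
        x = x - D.T @ (D @ x - y) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

def loss(D, y, lam=0.1):
    """Reconstruction loss of the unrolled encoder plus linear decoder."""
    x = unrolled_ista(D, y, lam)
    return 0.5 * np.sum((y - D @ x) ** 2)

def grad_step(D, y, lr=0.05, h=1e-5):
    """One 'backprop' step on D; the gradient through the unrolling is
    approximated by central differences (a real implementation would use
    an autodiff framework)."""
    g = np.zeros_like(D)
    for idx in np.ndindex(*D.shape):
        Dp, Dm = D.copy(), D.copy()
        Dp[idx] += h
        Dm[idx] -= h
        g[idx] = (loss(Dp, y) - loss(Dm, y)) / (2 * h)
    return D - lr * g
```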
arXiv Detail & Related papers (2021-05-31T18:49:58Z)
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
In contrast, the introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements of state-of-the-art models on semi-supervised image classification.
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
- Dictionary Learning with Low-rank Coding Coefficients for Tensor Completion [33.068635237011236]
Our model learns a data-adaptive dictionary from the given observations.
In the completion process, we minimize the low-rankness of each tensor slice containing the coding coefficients.
arXiv Detail & Related papers (2020-09-26T02:43:43Z)
- Efficient and Parallel Separable Dictionary Learning [2.6905021039717987]
We describe a highly parallelizable algorithm that learns such dictionaries.
We highlight the performance of the proposed method to sparsely represent image and hyperspectral data, and for image denoising.
arXiv Detail & Related papers (2020-07-07T21:46:32Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
- Contrast-weighted Dictionary Learning Based Saliency Detection for Remote Sensing Images [3.338193485961624]
We propose a novel saliency detection model based on Contrast-weighted Dictionary Learning (CDL) for remote sensing images.
Specifically, the proposed CDL learns salient and non-salient atoms from positive and negative samples to construct a discriminant dictionary.
By using the proposed joint saliency measure, a variety of saliency maps are generated based on the discriminant dictionary.
arXiv Detail & Related papers (2020-04-06T06:49:05Z)
- Learning Representations by Predicting Bags of Visual Words [55.332200948110895]
Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data.
Inspired by the success of NLP methods in this area, in this work we propose a self-supervised approach based on spatially dense image descriptions.
arXiv Detail & Related papers (2020-02-27T16:45:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.