Learning spatially adaptive sparsity level maps for arbitrary convolutional dictionaries
- URL: http://arxiv.org/abs/2602.21707v1
- Date: Wed, 25 Feb 2026 09:13:24 GMT
- Title: Learning spatially adaptive sparsity level maps for arbitrary convolutional dictionaries
- Authors: Joshua Schulz, David Schote, Christoph Kolbitsch, Kostas Papafitsoros, Andreas Kofler
- Abstract summary: We build on a recently proposed image reconstruction method, which is based on embedding data-driven information into a model-based convolutional dictionary regularization. We extend the method to achieve filter-permutation invariance as well as the possibility to change the convolutional dictionary at inference time.
- Score: 1.0243402599670037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art learned reconstruction methods often rely on black-box modules that, despite their strong performance, raise questions about their interpretability and robustness. Here, we build on a recently proposed image reconstruction method, which is based on embedding data-driven information into a model-based convolutional dictionary regularization via neural network-inferred spatially adaptive sparsity level maps. By means of improved network design and dedicated training strategies, we extend the method to achieve filter-permutation invariance as well as the possibility to change the convolutional dictionary at inference time. We apply our method to low-field MRI and compare it to several other recent deep learning-based methods, also on in vivo data, where the benefit of using a different dictionary is showcased. We further assess the method's robustness when tested on in- and out-of-distribution data. When tested on the latter, the proposed method suffers less from the data distribution shift than the other learned methods, which we attribute to its reduced reliance on training data due to its underlying model-based reconstruction component.
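The core idea of a spatially adaptive sparsity level map can be illustrated with a minimal sketch (hypothetical, not the authors' code): instead of a single scalar sparsity weight, each spatial location gets its own threshold, so the usual soft-thresholding step of dictionary-based regularization varies across the image.

```python
def soft_threshold(x: float, lam: float) -> float:
    """Proximal operator of lam * |x|: shrinks x toward zero by lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0


def spatially_adaptive_shrink(coeffs, lambda_map):
    """Shrink dictionary coefficients with a per-location threshold map.

    coeffs and lambda_map are flat sequences of equal length; in the paper's
    setting, lambda_map would be inferred by a neural network rather than
    fixed by hand.
    """
    assert len(coeffs) == len(lambda_map)
    return [soft_threshold(c, lam) for c, lam in zip(coeffs, lambda_map)]
```

A larger threshold at a location enforces more sparsity there; a near-zero threshold leaves the coefficients nearly untouched, which is how data-driven information can modulate a model-based reconstruction.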
Related papers
- Nonparametric Data Attribution for Diffusion Models [57.820618036556084]
Data attribution for generative models seeks to quantify the influence of individual training examples on model outputs. We propose a nonparametric attribution method that operates entirely on data, measuring influence via patch-level similarity between generated and training images.
arXiv Detail & Related papers (2025-10-16T03:37:16Z) - Efficient compression of neural networks and datasets [0.0]
We compare, improve, and contribute methods that substantially decrease the number of parameters of neural networks. When applying our methods to minimize description length, we obtain very effective data compression algorithms. We empirically verify the prediction that regularized models can exhibit more sample-efficient convergence.
arXiv Detail & Related papers (2025-05-23T04:50:33Z) - Fuzzy Rule-based Differentiable Representation Learning [16.706014479049493]
This paper introduces a novel representation learning method grounded in an interpretable fuzzy rule-based model. It is built upon the Takagi-Sugeno-Kang fuzzy system (TSK-FS) to initially map input data to a high-dimensional fuzzy feature space. A novel differentiable optimization method is proposed for the consequence part learning which can preserve the model's interpretability and transparency.
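The antecedent step of a TSK-style fuzzy system can be sketched as follows (a hypothetical illustration under the common choice of Gaussian membership functions, not the paper's implementation): each input is mapped to normalized firing strengths over a set of fuzzy sets, producing the high-dimensional fuzzy features the summary mentions.

```python
import math


def fuzzy_features(x: float, centers, sigma: float = 1.0):
    """Normalized firing strengths of Gaussian fuzzy sets for a scalar input.

    centers are the fuzzy-set prototypes; the outputs sum to 1 and serve as
    the fuzzy feature vector fed into the (differentiable) consequence part.
    """
    strengths = [math.exp(-((x - c) ** 2) / (2.0 * sigma**2)) for c in centers]
    total = sum(strengths)
    return [s / total for s in strengths]
```

An input near a particular center fires that fuzzy set most strongly, which is what makes the resulting representation readable as a set of rules.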
arXiv Detail & Related papers (2025-03-16T14:00:34Z) - Time-series attribution maps with regularized contrastive learning [1.5503410315996757]
Gradient-based attribution methods aim to explain decisions of deep learning models but so far lack identifiability guarantees. Here, we propose a method to generate attribution maps with identifiability guarantees by developing a regularized contrastive learning algorithm trained on time-series data. We show theoretically that xCEBRA has favorable properties for identifying the Jacobian matrix of the data generating process.
arXiv Detail & Related papers (2025-02-17T18:34:25Z) - Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z) - Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z) - Self-Supervised Learning for MRI Reconstruction with a Parallel Network Training Framework [24.46388892324129]
The proposed method is flexible and can be employed in any existing deep learning-based method.
The effectiveness of the method is evaluated on an open brain MRI dataset.
arXiv Detail & Related papers (2021-09-26T06:09:56Z) - Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification [114.56752624945142]
We argue that the most popular random sampling method, the well-known PK sampler, is not informative and efficient for deep metric learning.
We propose an efficient mini-batch sampling method called Graph Sampling (GS) for large-scale metric learning.
arXiv Detail & Related papers (2021-04-04T06:44:15Z) - Siloed Federated Learning for Multi-Centric Histopathology Datasets [0.17842332554022694]
This paper proposes a novel federated learning approach for deep learning architectures in the medical domain.
Local-statistic batch normalization (BN) layers are introduced, resulting in collaboratively-trained, yet center-specific models.
We benchmark the proposed method on the classification of tumorous histopathology image patches extracted from the Camelyon16 and Camelyon17 datasets.
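The local-statistic batch normalization idea above can be illustrated with a minimal sketch (hypothetical, simplified to 1D and pure Python): each center keeps its own running mean and variance, while the rest of the model would be trained collaboratively.

```python
def local_batch_norm(batch, center_stats, center_id, eps=1e-5, momentum=0.1):
    """Normalize a 1D batch using per-center running statistics.

    center_stats is a dict mapping center_id -> (running_mean, running_var),
    updated in place; different centers therefore keep different statistics
    even when all other parameters are shared.
    """
    mean = sum(batch) / len(batch)
    var = sum((v - mean) ** 2 for v in batch) / len(batch)
    run_mean, run_var = center_stats.get(center_id, (0.0, 1.0))
    center_stats[center_id] = (
        (1.0 - momentum) * run_mean + momentum * mean,
        (1.0 - momentum) * run_var + momentum * var,
    )
    return [(v - mean) / (var + eps) ** 0.5 for v in batch]
```

Keeping the normalization statistics local avoids mixing the intensity distributions of different centers, which is the stated motivation for center-specific models in the federated setting.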
arXiv Detail & Related papers (2020-08-17T15:49:30Z) - There and Back Again: Revisiting Backpropagation Saliency Methods [87.40330595283969]
Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.
A popular class of such methods is based on backpropagating a signal and analyzing the resulting gradient.
We propose a single framework under which several such methods can be unified.
arXiv Detail & Related papers (2020-04-06T17:58:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.