Deep unrolling for learning optimal spatially varying regularisation parameters for Total Generalised Variation
- URL: http://arxiv.org/abs/2502.16532v2
- Date: Thu, 06 Mar 2025 12:19:59 GMT
- Title: Deep unrolling for learning optimal spatially varying regularisation parameters for Total Generalised Variation
- Authors: Thanh Trung Vu, Andreas Kofler, Kostas Papafitsoros
- Abstract summary: The framework combines a deep convolutional neural network (CNN) inferring the two spatially varying TGV parameters with an unrolled scheme that solves the corresponding variational problem. Numerical results in image denoising and MRI reconstruction show a significant qualitative and quantitative improvement compared to the best TGV scalar parameter case. In particular, the parameter that weighs the first-order TGV term has a triple-edge structure with alternating high-low-high values, whereas the one that weighs the second-order term attains small values in a large neighbourhood around the edges.
- Score: 0.393259574660092
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We extend a recently introduced deep unrolling framework for learning spatially varying regularisation parameters in inverse imaging problems to the case of Total Generalised Variation (TGV). The framework combines a deep convolutional neural network (CNN) inferring the two spatially varying TGV parameters with an unrolled algorithmic scheme that solves the corresponding variational problem. The two subnetworks are jointly trained end-to-end in a supervised fashion, so the CNN learns to compute the parameters that drive the reconstructed images as close to the ground truth as possible. Numerical results in image denoising and MRI reconstruction show a significant qualitative and quantitative improvement over the best scalar-parameter TGV case, as well as over other approaches that employ spatially varying parameters computed by unsupervised methods. We also observe that the inferred spatially varying parameter maps have a consistent structure near the image edges, which calls for further theoretical investigation. In particular, the parameter that weighs the first-order TGV term has a triple-edge structure with alternating high-low-high values, whereas the one that weighs the second-order term attains small values in a large neighbourhood around the edges.
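For context, the variational problem that the unrolled scheme solves can be written, in standard second-order TGV notation with the two weights promoted to spatially varying maps (our notation, not quoted from the paper), as

$$
\min_{u}\; \frac{1}{2}\|Au - f\|_2^2 + \mathrm{TGV}^2_{\alpha}(u),
\qquad
\mathrm{TGV}^2_{\alpha}(u) = \min_{w}\; \int_{\Omega} \alpha_1(x)\,|\nabla u - w|\,\mathrm{d}x + \int_{\Omega} \alpha_0(x)\,|\mathcal{E}w|\,\mathrm{d}x,
$$

where $A$ is the forward operator (the identity for denoising, a subsampled Fourier transform for MRI), $\mathcal{E}$ is the symmetrised gradient, and $\alpha_1(x)$, $\alpha_0(x)$ are the maps weighting the first- and second-order terms that the CNN infers.

Below is a minimal, self-contained PyTorch sketch of this kind of architecture for the denoising case: a small CNN predicts the two positive parameter maps, and a fixed number of unrolled Chambolle-Pock (primal-dual) iterations solve the resulting TGV problem, so the whole pipeline is differentiable end-to-end. All network sizes, step sizes and discretisation details are illustrative assumptions, not the authors' implementation; in particular, the off-diagonal entry of the symmetrised gradient is counted once in the norm, a slight simplification of the usual Frobenius weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad(u):                                   # forward differences: (B,H,W) -> (B,2,H,W)
    gx = F.pad(u[..., :, 1:] - u[..., :, :-1], (0, 1))
    gy = F.pad(u[..., 1:, :] - u[..., :-1, :], (0, 0, 0, 1))
    return torch.stack((gx, gy), dim=1)

def div(p):                                    # negative adjoint of grad (backward diffs)
    px, py = p[:, 0], p[:, 1]
    return (px - F.pad(px[..., :, :-1], (1, 0))
            + py - F.pad(py[..., :-1, :], (0, 0, 1, 0)))

def sym_grad(w):                               # symmetrised gradient: (B,2,H,W) -> (B,3,H,W)
    g1, g2 = grad(w[:, 0]), grad(w[:, 1])      # channels: e11, e22, e12
    return torch.stack((g1[:, 0], g2[:, 1], 0.5 * (g1[:, 1] + g2[:, 0])), dim=1)

def sym_div(q):                                # negative adjoint of sym_grad
    bx = lambda v: v - F.pad(v[..., :, :-1], (1, 0))
    by = lambda v: v - F.pad(v[..., :-1, :], (0, 0, 1, 0))
    e11, e22, e12 = q[:, 0], q[:, 1], q[:, 2]
    return torch.stack((bx(e11) + 0.5 * by(e12),
                        by(e22) + 0.5 * bx(e12)), dim=1)

class UnrolledTGVDenoiser(nn.Module):
    """CNN infers alpha1(x), alpha0(x); unrolled PDHG solves the TGV problem."""
    def __init__(self, iters=64, hidden=32):
        super().__init__()
        self.iters = iters
        self.cnn = nn.Sequential(              # tiny illustrative parameter-map network
            nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2, 3, padding=1), nn.Softplus())  # strictly positive maps

    def forward(self, f):                      # f: noisy image, (B,1,H,W)
        a = self.cnn(f)                        # (B,2,H,W): alpha1(x), alpha0(x)
        a1, a0 = a[:, 0:1], a[:, 1:2]
        u = f[:, 0].clone()
        w = torch.zeros_like(grad(u)); p = torch.zeros_like(w)
        q = torch.zeros_like(sym_grad(w))
        ub, wb = u, w
        tau = sigma = 12 ** -0.5               # tau*sigma*||K||^2 <= 1 (conservative bound)
        for _ in range(self.iters):            # unrolled Chambolle-Pock iterations
            # dual ascent, then pointwise projection onto {|.| <= alpha(x)}
            p = p + sigma * (grad(ub) - wb)
            p = p / torch.clamp(p.norm(dim=1, keepdim=True) / a1, min=1.0)
            q = q + sigma * sym_grad(wb)
            q = q / torch.clamp(q.norm(dim=1, keepdim=True) / a0, min=1.0)
            # primal descent; closed-form prox of the 0.5*||u - f||^2 data term
            u_new = (u + tau * div(p) + tau * f[:, 0]) / (1 + tau)
            w_new = w + tau * (p + sym_div(q))
            ub, wb = 2 * u_new - u, 2 * w_new - w   # over-relaxation step
            u, w = u_new, w_new
        return u.unsqueeze(1)
```

Training would then proceed in the usual supervised fashion, e.g. minimising `F.mse_loss(model(noisy), clean)` over pairs of noisy and ground-truth images, so that gradients flow through the unrolled iterations into the parameter-map CNN.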
Related papers
- Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations [50.010924231754856]
Adapting pre-trained foundation models for diverse downstream tasks is a core practice in artificial intelligence, but fully fine-tuning them is computationally expensive.
To overcome this, parameter-efficient fine-tuning (PEFT) methods like LoRA have emerged and are becoming a growing research focus.
We propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties.
arXiv Detail & Related papers (2025-04-01T14:36:45Z)
- Learning Spatially Adaptive $\ell_1$-Norms Weights for Convolutional Synthesis Regularization [1.1699566743796068]
We consider a family of pre-trained convolutional filters and estimate deeply parametrised, spatially varying weights applied to the sparse feature maps; a sketch of such a spatially weighted $\ell_1$ synthesis term is given after this list.
We show that the proposed approach produces results visually and quantitatively comparable to those of competing approaches while remaining highly interpretable.
arXiv Detail & Related papers (2025-03-12T15:38:11Z)
- Hyperboloid GPLVM for Discovering Continuous Hierarchies via Nonparametric Estimation [41.13597666007784]
Dimensionality reduction (DR) offers a useful representation of complex high-dimensional data.
Recent DR methods focus on hyperbolic geometry to derive a faithful low-dimensional representation of hierarchical data.
This paper presents hGP-LVMs to embed high-dimensional hierarchical data with implicit continuity via nonparametric estimation.
arXiv Detail & Related papers (2024-10-22T05:07:30Z)
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
We introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
We also propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- Maintaining Structural Integrity in Parameter Spaces for Parameter Efficient Fine-tuning [78.39310274926535]
Adapting pre-trained foundation models for various downstream tasks has been prevalent in artificial intelligence, but updating all of their weights is resource-intensive. To mitigate this, several fine-tuning techniques have been developed that update the pre-trained model weights in a more resource-efficient manner. This paper introduces a generalized parameter-efficient fine-tuning framework designed for parameter spaces of various dimensions.
arXiv Detail & Related papers (2024-05-23T16:04:42Z)
- Reparameterization through Spatial Gradient Scaling [69.27487006953852]
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
We present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional networks.
arXiv Detail & Related papers (2023-03-05T17:57:33Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting, and of both compression and improved training stability in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Bayesian multiscale deep generative model for the solution of high-dimensional inverse problems [0.0]
A novel multiscale Bayesian inference approach is introduced based on deep probabilistic generative models.
The method allows high-dimensional parameter estimation while exhibiting stability, efficiency and accuracy.
arXiv Detail & Related papers (2021-02-04T11:47:21Z)
- The Heavy-Tail Phenomenon in SGD [7.366405857677226]
We show that, depending on the structure of the Hessian of the loss at the minimum, the SGD iterates converge to a heavy-tailed stationary distribution.
We translate our results into insights about the behavior of SGD in deep learning.
arXiv Detail & Related papers (2020-06-08T16:43:56Z)
- Joint Total Variation ESTATICS for Robust Multi-Parameter Mapping [0.0]
ESTATICS performs a joint log-linear fit of multiple echo series to extract R2* and multiple extrapolated intercepts; a sketch of this signal model is given after this list.
We evaluate the proposed algorithm by predicting left-out echoes in a rich single-subject dataset.
arXiv Detail & Related papers (2020-05-28T19:08:42Z)
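For the spatially adaptive $\ell_1$-norms entry above, a plausible form of the underlying problem (our notation, assuming the usual convolutional synthesis model; not quoted from that paper) is

$$
\min_{z_1,\dots,z_K}\; \frac{1}{2}\Big\|A\Big(\sum_{k=1}^{K} h_k * z_k\Big) - f\Big\|_2^2 + \sum_{k=1}^{K} \int_{\Omega} \Lambda_k(x)\,|z_k(x)|\,\mathrm{d}x,
$$

where the $h_k$ are the pre-trained convolutional filters, the $z_k$ the sparse feature maps, and the $\Lambda_k(x)$ the deeply parametrised spatially varying weights, playing the same role as the TGV maps $\alpha_0(x)$, $\alpha_1(x)$ above.

For the ESTATICS entry, the joint log-linear fit refers to the standard multi-echo decay model (again our notation): each contrast $c$ acquires echoes $S_c(t)$ at echo times $t$, modelled as

$$
\log S_c(t) = \theta_c - R_2^{*}\, t,
$$

so that all echo series share a single decay rate $R_2^{*}$ while each contrast keeps its own extrapolated intercept $\theta_c$; fitting all series jointly in the log domain is what makes the resulting parameter maps robust.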
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.