Differentiable SVD based on Moore-Penrose Pseudoinverse for Inverse Imaging Problems
- URL: http://arxiv.org/abs/2411.14141v1
- Date: Thu, 21 Nov 2024 14:04:38 GMT
- Title: Differentiable SVD based on Moore-Penrose Pseudoinverse for Inverse Imaging Problems
- Authors: Yinghao Zhang, Yue Hu
- Abstract summary: We show that the non-differentiability of singular value decomposition is essentially due to an underdetermined system of linear equations.
We utilize the Moore-Penrose pseudoinverse to solve the system, thereby proposing a differentiable SVD.
Experimental results in color image compressed sensing and dynamic MRI reconstruction show that our proposed differentiable SVD can effectively address the numerical instability issue.
- Score: 7.466874119963763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank regularization-based deep unrolling networks have achieved remarkable success in various inverse imaging problems (IIPs). However, the singular value decomposition (SVD) is non-differentiable when duplicated singular values occur, leading to severe numerical instability during training. In this paper, we propose a differentiable SVD based on the Moore-Penrose pseudoinverse to address this issue. To the best of our knowledge, this is the first work to provide a comprehensive analysis of the differentiability of the trivial SVD. Specifically, we show that the non-differentiability of SVD is essentially due to an underdetermined system of linear equations arising in the derivation process. We utilize the Moore-Penrose pseudoinverse to solve the system, thereby proposing a differentiable SVD. A numerical stability analysis in the context of IIPs is provided. Experimental results in color image compressed sensing and dynamic MRI reconstruction show that our proposed differentiable SVD can effectively address the numerical instability issue while ensuring computational precision. Code is available at https://github.com/yhao-z/SVD-inv.
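To make the mechanism concrete, below is a minimal PyTorch sketch of a thin-SVD layer whose backward pass replaces the problematic reciprocals 1/(s_j^2 - s_i^2) and 1/s_i with scalar Moore-Penrose pseudoinverses (1/x for x != 0, and 0 otherwise), so duplicated or zero singular values yield finite, minimum-norm gradients. This is an illustration in the spirit of the paper, not the authors' released implementation (see the linked repository for that).

```python
import torch

def spinv(x, eps=1e-10):
    # Scalar Moore-Penrose pseudoinverse: 1/x where |x| > eps, else 0.
    return torch.where(x.abs() > eps, x.reciprocal(), torch.zeros_like(x))

class SVDpinv(torch.autograd.Function):
    # Thin SVD, A = U diag(S) V^T, whose backward stays finite for
    # duplicated or zero singular values (minimum-norm solution).

    @staticmethod
    def forward(ctx, A):
        U, S, Vh = torch.linalg.svd(A, full_matrices=False)
        V = Vh.transpose(-2, -1)
        ctx.save_for_backward(U, S, V)
        return U, S, V

    @staticmethod
    def backward(ctx, gU, gS, gV):
        U, S, V = ctx.saved_tensors
        gU = torch.zeros_like(U) if gU is None else gU
        gS = torch.zeros_like(S) if gS is None else gS
        gV = torch.zeros_like(V) if gV is None else gV
        s2 = S * S
        # F_ij = pinv(s_j^2 - s_i^2): 0 instead of Inf when s_i == s_j.
        F = spinv(s2.unsqueeze(-2) - s2.unsqueeze(-1))
        Sinv = spinv(S)  # pinv(s_i), robust to rank deficiency
        UtgU = U.transpose(-2, -1) @ gU
        VtgV = V.transpose(-2, -1) @ gV
        inner = (torch.diag_embed(gS)
                 + (F * (UtgU - UtgU.transpose(-2, -1))) * S.unsqueeze(-2)
                 + S.unsqueeze(-1) * (F * (VtgV - VtgV.transpose(-2, -1))))
        gA = U @ inner @ V.transpose(-2, -1)
        gA = gA + ((gU - U @ UtgU) * Sinv.unsqueeze(-2)) @ V.transpose(-2, -1)
        gA = gA + (U * Sinv.unsqueeze(-2)) @ (gV - V @ VtgV).transpose(-2, -1)
        return gA

A = torch.eye(3, dtype=torch.double, requires_grad=True)  # all singular values equal
U, S, V = SVDpinv.apply(A)
(U.sum() + S.sum() + V.sum()).backward()
print(A.grad)  # finite, where the plain SVD backward can produce NaN/Inf
```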
Related papers
- SVD-NO: Learning PDE Solution Operators with SVD Integral Kernels [35.16133249685271]
We present SVD-NO, a neural operator that parameterizes the kernel by its singular-value decomposition (SVD) and then carries out the integral directly in the low-rank basis. As SVD-NO approximates the full kernel, it obtains a high degree of expressivity.
arXiv Detail & Related papers (2025-11-13T07:02:05Z)
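As a rough illustration of the low-rank trick: once the kernel is stored as K(x, y) = sum_r s_r u_r(x) v_r(y), the integral transform costs r inner products instead of a dense n x n kernel evaluation. The module below is a hypothetical sketch (names, shapes, and the uniform-grid quadrature are assumptions, not the SVD-NO code).

```python
import torch

class LowRankIntegralOperator(torch.nn.Module):
    # Kernel parameterized by its SVD: K(x, y) = sum_r s_r * u_r(x) * v_r(y),
    # with u_r, v_r sampled on an n-point grid. (Hypothetical sketch.)
    def __init__(self, n_points: int, rank: int):
        super().__init__()
        self.u = torch.nn.Parameter(torch.randn(n_points, rank) / rank ** 0.5)
        self.v = torch.nn.Parameter(torch.randn(n_points, rank) / rank ** 0.5)
        self.s = torch.nn.Parameter(torch.ones(rank))

    def forward(self, f: torch.Tensor, dx: float = 1.0) -> torch.Tensor:
        # f: (batch, n_points). Quadrature <v_r, f> ~ dx * sum_y v_r(y) f(y),
        # then (K f)(x) = sum_r s_r <v_r, f> u_r(x).
        coeff = (f @ self.v) * dx             # (batch, rank)
        return (coeff * self.s) @ self.u.t()  # (batch, n_points)

op = LowRankIntegralOperator(n_points=256, rank=16)
out = op(torch.randn(8, 256), dx=1.0 / 256)
```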
- Post-processing for Fair Regression via Explainable SVD [6.882042556551613]
We propose a linear transformation of the weight matrix, whereby the singular values derived from the SVD correspond to the differences in the first and second moments of the output distributions across two groups.
We analytically solve the problem of finding the optimal weights under these constraints.
Experimental validation on various datasets demonstrates that our method achieves a similar or superior fairness-accuracy trade-off compared to the baselines.
arXiv Detail & Related papers (2025-04-04T00:10:01Z)
- AdaSVD: Adaptive Singular Value Decomposition for Large Language Models [84.60646883395454]
Singular Value Decomposition (SVD) has emerged as a promising compression technique for large language models (LLMs).
Existing SVD-based methods often struggle to effectively mitigate the errors introduced by SVD truncation.
We propose AdaSVD, an adaptive SVD-based LLM compression approach.
arXiv Detail & Related papers (2025-02-03T14:34:37Z)
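For reference, plain (non-adaptive) truncated-SVD compression of a weight matrix looks like the sketch below; AdaSVD's contribution lies in how truncation errors are compensated and ranks allocated, which this sketch does not reproduce.

```python
import torch

def truncated_linear(weight: torch.Tensor, rank: int):
    # Compress W (out x in) into two skinny factors via truncated SVD:
    # W ~ A @ B with A = U_r diag(s_r), B = Vh_r, shrinking the parameter
    # count from out*in to rank*(out + in).
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] * S[:rank], Vh[:rank, :]

W = torch.randn(4096, 1024)
A, B = truncated_linear(W, rank=64)
print((W - A @ B).norm() / W.norm())  # relative truncation error
```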
- OTLRM: Orthogonal Learning-based Low-Rank Metric for Multi-Dimensional Inverse Problems [14.893020063373022]
We introduce a novel data-driven generative low-rank t-SVD model based on the learnable orthogonal transform.
We also propose a low-rank solver as a generalization of SVT, which utilizes an efficient representation of generative networks to obtain low-rank structures.
arXiv Detail & Related papers (2024-12-15T12:28:57Z)
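As context for the SVT generalization: classical singular value thresholding, the proximal operator of the nuclear norm, is sketched below; OTLRM replaces the fixed transform and shrinkage with learnable components.

```python
import torch

def svt(X: torch.Tensor, tau: float) -> torch.Tensor:
    # Singular value thresholding: shrink each singular value by tau and
    # clamp at zero, which is the prox of tau * (nuclear norm).
    U, S, Vh = torch.linalg.svd(X, full_matrices=False)
    return (U * (S - tau).clamp(min=0.0)) @ Vh

X = torch.randn(64, 48)
print(torch.linalg.matrix_rank(svt(X, tau=5.0)))  # rank drops as tau grows
```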
- Learning in Feature Spaces via Coupled Covariances: Asymmetric Kernel SVD and Nyström method [21.16129116282759]
We introduce a new asymmetric learning paradigm based on the coupled covariance eigenproblem (CCE).
We formalize the asymmetric Nyström method through a finite-sample approximation to speed up training.
arXiv Detail & Related papers (2024-06-13T02:12:18Z)
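For intuition, the classical symmetric Nyström approximation that the asymmetric variant generalizes fits in a few lines; the coupled-covariance machinery of the paper is not reproduced here.

```python
import numpy as np

def nystrom(K: np.ndarray, m: int, seed: int = 0) -> np.ndarray:
    # Classical Nystrom: approximate a PSD kernel matrix K (n x n) from m
    # landmark columns, K ~ C @ pinv(W) @ C.T with C = K[:, idx] and
    # W = K[idx][:, idx].
    rng = np.random.default_rng(seed)
    idx = rng.choice(K.shape[0], size=m, replace=False)
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).normal(size=(500, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 10.0)  # RBF kernel
print(np.linalg.norm(K - nystrom(K, m=100)) / np.linalg.norm(K))  # shrinks with m
```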
- On the Noise Sensitivity of the Randomized SVD [8.98526174345299]
The randomized singular value decomposition (R-SVD) is a popular sketching-based algorithm for efficiently computing the partial SVD of a large matrix.
We analyze the R-SVD under a low-rank signal plus noise measurement model.
The singular values produced by the R-SVD are shown to exhibit a BBP-like phase transition.
arXiv Detail & Related papers (2023-05-27T10:15:17Z)
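The R-SVD under analysis is the standard sketch-and-project algorithm of Halko et al.; a minimal version for reference:

```python
import numpy as np

def randomized_svd(A: np.ndarray, rank: int, oversample: int = 10, iters: int = 2):
    # Sketch the range of A with a Gaussian test matrix, orthonormalize,
    # then take the exact SVD of the small projected matrix B = Q.T @ A.
    rng = np.random.default_rng(0)
    Y = A @ rng.normal(size=(A.shape[1], rank + oversample))
    for _ in range(iters):       # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A
    Ub, S, Vh = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], S[:rank], Vh[:rank]

A = np.random.default_rng(2).normal(size=(1000, 300))
U, S, Vh = randomized_svd(A, rank=20)
print(S[:3])                                   # leading values...
print(np.linalg.svd(A, compute_uv=False)[:3])  # ...roughly match the exact ones
```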
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance, and in particular the sample complexity, of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Numerical Optimizations for Weighted Low-rank Estimation on Language Model [73.12941276331316]
Singular value decomposition (SVD) is one of the most popular compression methods that approximates a target matrix with smaller matrices.
Standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.
We show that our method can perform better than current SOTA methods in neural-based language models.
arXiv Detail & Related papers (2022-11-02T00:58:02Z)
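A common way to inject importance into the factorization, in the spirit of this line of work (the paper's exact weighting may differ), is to scale rows by their importance before the SVD and undo the scaling afterwards:

```python
import numpy as np

def weighted_truncated_svd(W: np.ndarray, row_importance: np.ndarray, rank: int):
    # Weighted low-rank estimate: SVD the importance-scaled matrix diag(d) @ W,
    # truncate, then unscale, so high-importance rows are reconstructed more
    # faithfully. (Illustrative; methods differ in how importance is estimated.)
    d = np.sqrt(row_importance)
    U, S, Vh = np.linalg.svd(W * d[:, None], full_matrices=False)
    return (U[:, :rank] * S[:rank]) / d[:, None], Vh[:rank]

W = np.random.default_rng(0).normal(size=(512, 256))
imp = np.random.default_rng(1).uniform(0.1, 10.0, size=512)
A, B = weighted_truncated_svd(W, imp, rank=32)
err = np.abs(W - A @ B).mean(axis=1)
print(np.corrcoef(imp, -err)[0, 1])  # tends to be positive: important rows fit better
```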
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Why Approximate Matrix Square Root Outperforms Accurate SVD in Global Covariance Pooling? [59.820507600960745]
We propose a new GCP meta-layer that uses SVD in the forward pass, and Padé approximants in the backward propagation to compute the gradients.
The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performance on both large-scale and fine-grained datasets.
arXiv Detail & Related papers (2021-05-06T08:03:45Z)
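For flavor, an iterative, matmul-only matrix square root of the kind used in GCP layers is sketched below with the Newton-Schulz iteration; note the paper itself uses SVD in the forward pass and Padé approximants for the backward, which this sketch does not implement.

```python
import torch

def sqrtm_newton_schulz(A: torch.Tensor, iters: int = 10) -> torch.Tensor:
    # Coupled Newton-Schulz iteration for the square root of an SPD matrix:
    # only matmuls, hence GPU- and autodiff-friendly (no SVD in the loop).
    n = A.shape[-1]
    norm = A.norm()  # pre-scale so the iteration converges
    Y = A / norm
    Z = torch.eye(n, dtype=A.dtype, device=A.device)
    I3 = 3.0 * torch.eye(n, dtype=A.dtype, device=A.device)
    for _ in range(iters):
        T = 0.5 * (I3 - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * norm.sqrt()

M = torch.randn(32, 32, dtype=torch.double)
A = M @ M.t() + 32 * torch.eye(32, dtype=torch.double)  # well-conditioned SPD
S = sqrtm_newton_schulz(A)
print(((S @ S - A).norm() / A.norm()).item())  # small residual
```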
- Accurate and fast matrix factorization for low-rank learning [4.435094091999926]
We tackle two important challenges related to the accurate partial singular value decomposition (SVD) and numerical rank estimation of a huge matrix.
We use the concepts of Krylov subspaces such as the Golub-Kahan bidiagonalization process as well as Ritz vectors to achieve these goals.
arXiv Detail & Related papers (2021-04-21T22:35:02Z)
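A bare-bones Golub-Kahan bidiagonalization (with full reorthogonalization for stability) shows how a Krylov subspace reduces a huge matrix to a small bidiagonal one whose singular values approximate the dominant ones; this sketches the classical process, not the paper's full algorithm.

```python
import numpy as np

def golub_kahan_ritz_values(A, k, seed=0):
    # k steps of Golub-Kahan bidiagonalization: A V_k = U_k B_k with B_k
    # upper bidiagonal; the SVD of the small k x k B_k yields Ritz values
    # approximating A's leading singular values.
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U, V = np.zeros((m, k)), np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    v = rng.normal(size=n); v /= np.linalg.norm(v); V[:, 0] = v
    u = A @ v; alpha[0] = np.linalg.norm(u); U[:, 0] = u / alpha[0]
    for j in range(1, k):
        v = A.T @ U[:, j - 1] - alpha[j - 1] * V[:, j - 1]
        v -= V[:, :j] @ (V[:, :j].T @ v)   # full reorthogonalization
        beta[j - 1] = np.linalg.norm(v); V[:, j] = v / beta[j - 1]
        u = A @ V[:, j] - beta[j - 1] * U[:, j - 1]
        u -= U[:, :j] @ (U[:, :j].T @ u)
        alpha[j] = np.linalg.norm(u); U[:, j] = u / alpha[j]
    B = np.diag(alpha) + np.diag(beta, k=1)
    return np.linalg.svd(B, compute_uv=False)

A = np.random.default_rng(1).normal(size=(2000, 500))
print(golub_kahan_ritz_values(A, k=30)[:3])    # leading Ritz values...
print(np.linalg.svd(A, compute_uv=False)[:3])  # ...approximate these
```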
- Robust Differentiable SVD [117.35644933471401]
Eigendecomposition of symmetric matrices is at the heart of many computer vision algorithms.
Instability arises in the presence of eigenvalues that are close to each other.
We show that the Taylor expansion of the SVD gradient is theoretically equivalent to the gradient obtained using PI without relying on an iterative process.
arXiv Detail & Related papers (2021-04-08T15:04:15Z)
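The trick is easy to see numerically: 1/(lam_i - lam_j) equals a geometric series in lam_j/lam_i, and the truncated series stays bounded even as lam_j approaches lam_i. A quick sketch of that comparison (notation mine, after the paper's idea):

```python
import numpy as np

def k_exact(lam_i, lam_j):
    return 1.0 / (lam_i - lam_j)  # blows up as lam_j -> lam_i

def k_taylor(lam_i, lam_j, terms=9):
    # 1/(lam_i - lam_j) = (1/lam_i) * sum_k (lam_j/lam_i)^k for lam_i > lam_j >= 0;
    # truncating the sum keeps the coefficient bounded near lam_i == lam_j.
    r = lam_j / lam_i
    return sum(r ** k for k in range(terms + 1)) / lam_i

for lam_j in [0.5, 0.9, 0.999, 1.0 - 1e-12]:
    print(k_exact(1.0, lam_j), k_taylor(1.0, lam_j))
# the exact coefficient diverges toward 1e12, while the truncated series
# saturates near terms + 1 = 10 -- which is what keeps gradients finite
```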
- Neural Decomposition: Functional ANOVA with Variational Autoencoders [9.51828574518325]
Variational Autoencoders (VAEs) have become a popular approach for dimensionality reduction.
Due to the black-box nature of VAEs, their utility for healthcare and genomics applications has been limited.
We focus on characterising the sources of variation in Conditional VAEs.
arXiv Detail & Related papers (2020-06-25T10:29:13Z)
- Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification [53.50708351813565]
We propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a higher reduction in computational load under the same accuracy.
arXiv Detail & Related papers (2020-04-20T02:40:43Z)
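The recipe can be sketched directly: keep each layer in factorized form W = U diag(s) V^T, push U^T U and V^T V toward the identity, and sparsify s with an L1 penalty. A rough illustration (hyperparameters and names are assumptions, not the paper's code):

```python
import torch

class SVDLinear(torch.nn.Module):
    # Linear layer stored as W = U diag(s) V^T throughout training.
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        r = min(d_in, d_out)
        self.U = torch.nn.Parameter(torch.randn(d_out, r) / d_out ** 0.5)
        self.s = torch.nn.Parameter(torch.ones(r))
        self.V = torch.nn.Parameter(torch.randn(d_in, r) / d_in ** 0.5)

    def forward(self, x):
        return ((x @ self.V) * self.s) @ self.U.t()

    def regularizer(self, ortho: float = 1e-2, sparse: float = 1e-3):
        # Orthogonality keeps s interpretable as singular values; the L1
        # term drives some of them to zero, i.e. reduces the layer's rank.
        I = torch.eye(self.s.numel())
        return (ortho * ((self.U.t() @ self.U - I).pow(2).sum()
                         + (self.V.t() @ self.V - I).pow(2).sum())
                + sparse * self.s.abs().sum())

layer = SVDLinear(256, 128)
loss = layer(torch.randn(32, 256)).pow(2).mean() + layer.regularizer()
loss.backward()
```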