Deformable Groupwise Image Registration using Low-Rank and Sparse
Decomposition
- URL: http://arxiv.org/abs/2001.03509v1
- Date: Fri, 10 Jan 2020 15:25:36 GMT
- Title: Deformable Groupwise Image Registration using Low-Rank and Sparse
Decomposition
- Authors: Roland Haase, Stefan Heldmann, Jan Lellmann
- Abstract summary: In this paper, we investigate the drawbacks of the most common RPCA-dissimilarity metric in image registration.
We present a theoretically justified multilevel scheme based on first-order primal-dual optimization to solve the resulting non-parametric registration problem.
- Score: 0.23310144143158676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank and sparse decompositions and robust PCA (RPCA) are highly
successful techniques in image processing and have recently found use in
groupwise image registration. In this paper, we investigate the drawbacks of
the most common RPCA-dissimilarity metric in image registration and derive an
improved version. In particular, this new metric models low-rank requirements
through explicit constraints instead of penalties and thus avoids the pitfalls
of the established metric. Equipped with total variation regularization, we
present a theoretically justified multilevel scheme based on first-order
primal-dual optimization to solve the resulting non-parametric registration
problem. As confirmed by numerical experiments, our metric especially lends
itself to data involving recurring changes in object appearance and potential
sparse perturbations. We numerically compare its performance to a number of
related approaches.
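For context, the generic penalized RPCA program min ||L||_* + lam*||S||_1 s.t. L + S = M that this paper builds on can be sketched with a standard ADMM iteration (singular-value thresholding for the low-rank part, soft thresholding for the sparse part). This is an illustrative textbook baseline, not the authors' constrained metric or their primal-dual multilevel scheme:

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split M into L (low-rank) + S (sparse) by ADMM on the
    penalized RPCA program  min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard RPCA weight
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()   # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # dual variable for L + S = M
    norm_M = np.linalg.norm(M)
    for _ in range(max_iter):
        # L-update: singular value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: elementwise soft thresholding at level lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual ascent on the constraint L + S = M
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * norm_M:
            break
    return L, S
```

On data that is exactly rank-one plus a few spikes, the iteration separates the two components cleanly; the paper's point is that the penalty formulation above has pitfalls as a registration dissimilarity, which their explicit-constraint version avoids.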
Related papers
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Nonnegative-Constrained Joint Collaborative Representation with Union Dictionary for Hyperspectral Anomaly Detection [14.721615285883429]
Collaborative representation-based (CR) algorithms have been proposed for hyperspectral anomaly detection.
CR-based detectors approximate the image by a linear combination of background dictionaries and the coefficient matrix, and derive the detection map by utilizing recovery residuals.
This paper proposes a nonnegative-constrained joint collaborative representation model for the hyperspectral anomaly detection task.
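The CR detection principle described above — represent each pixel over a background dictionary and score it by the recovery residual — can be sketched with a plain ridge-regularized least-squares fit. This is a generic illustration of the residual-based detector, not the paper's nonnegative-constrained joint model:

```python
import numpy as np

def cr_anomaly_scores(pixels, background_dict, lam=0.1):
    """Score each pixel spectrum by the residual of its ridge-regularized
    collaborative representation over a background dictionary.
    pixels: (n, b) spectra; background_dict: (m, b) background spectra."""
    D = background_dict.T                        # (b, m) dictionary matrix
    G = D.T @ D + lam * np.eye(D.shape[1])       # regularized Gram matrix
    coeffs = np.linalg.solve(G, D.T @ pixels.T)  # (m, n) representation coefficients
    residuals = pixels.T - D @ coeffs            # recovery residuals
    return np.linalg.norm(residuals, axis=0)     # larger residual -> more anomalous
```

Background spectra lying in the dictionary's span reconstruct almost perfectly, while an anomalous spectrum keeps a large residual, which is exactly the detection map described in the summary.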
arXiv Detail & Related papers (2022-03-18T16:02:27Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques.
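The one-sample idea can be sketched as a Frank-Wolfe (conditional gradient) loop that keeps a SAG-style table of per-sample gradients and refreshes a single entry per iteration. This is a toy illustration on a simple quadratic over the simplex, under assumed step sizes, not the paper's method or rates:

```python
import numpy as np

def one_sample_cgm(c_samples, n_iters=5000, seed=0):
    """Frank-Wolfe over the probability simplex for
    f(x) = (1/n) * sum_i 0.5 * ||x - c_i||^2,
    refreshing one per-sample gradient (SAG-style) per iteration."""
    rng = np.random.default_rng(seed)
    n, d = c_samples.shape
    x = np.full(d, 1.0 / d)              # start at the simplex barycenter
    grads = x - c_samples                # per-sample gradients at x0 (one full pass)
    for k in range(n_iters):
        i = rng.integers(n)              # draw ONE sample per iteration
        grads[i] = x - c_samples[i]      # refresh that sample's stored gradient
        g = grads.mean(axis=0)           # SAG estimate of the full gradient
        v = np.zeros(d)
        v[np.argmin(g)] = 1.0            # linear minimization oracle on the simplex
        x = x + 2.0 / (k + 2) * (v - x)  # standard Frank-Wolfe step size
    return x
```

Because every iterate is a convex combination of simplex vertices, the method is projection-free, which is the main appeal of conditional gradient schemes.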
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z)
- Constraining Volume Change in Learned Image Registration for Lung CTs [4.37795447716986]
In this paper, we identify important strategies of conventional lung registration methods and develop their deep-learning counterparts.
We employ a Gaussian-pyramid-based multilevel framework that can solve the image registration optimization in a coarse-to-fine fashion.
We show that it achieves state-of-the-art results on the COPDGene dataset compared to the challenge-winning conventional registration method, with much shorter execution time.
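The Gaussian-pyramid multilevel idea mentioned above (solve coarse, then refine) rests on a simple construction: repeatedly blur and downsample the image. A minimal sketch with a binomial (Gaussian-like) kernel, purely illustrative of the pyramid itself, not of the learned registration:

```python
import numpy as np

def gaussian_pyramid(image, levels=3):
    """Build a Gaussian pyramid by blurring with a small binomial
    kernel and downsampling by 2 at each level (finest first)."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0   # 1-D binomial kernel, separable blur
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        # filter rows, then columns (edge-padded so the size is preserved)
        img = np.apply_along_axis(
            lambda r: np.convolve(np.pad(r, 1, mode="edge"), k, mode="valid"), 1, img)
        img = np.apply_along_axis(
            lambda c: np.convolve(np.pad(c, 1, mode="edge"), k, mode="valid"), 0, img)
        pyramid.append(img[::2, ::2])     # downsample by a factor of 2
    return pyramid
```

Coarse-to-fine registration then solves the problem on pyramid[-1] first and uses each solution to initialize the next finer level.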
arXiv Detail & Related papers (2020-11-29T14:09:31Z)
- Image Inpainting with Learnable Feature Imputation [8.293345261434943]
A regular convolution layer applying a filter in the same way over known and unknown areas causes visual artifacts in the inpainted image.
We propose (layer-wise) feature imputation of the missing input values to a convolution.
We present comparisons on CelebA-HQ and Places2 to current state-of-the-art to validate our model.
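The summary does not specify the learnable imputation itself; a closely related hand-crafted baseline is the mask-normalized (partial-convolution-style) layer, where unknown inputs are zeroed and each output is rescaled by the fraction of known pixels in its window. A sketch under that assumption:

```python
import numpy as np

def masked_conv2d(image, mask, kernel):
    """Valid 2-D convolution that ignores unknown pixels (mask == 0):
    unknown inputs are zeroed and each output is rescaled by the number
    of known pixels under the window (partial-convolution style)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    new_mask = np.zeros_like(out)          # 1 where at least one input was known
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i + kh, j:j + kw]
            m = mask[i:i + kh, j:j + kw]
            known = m.sum()
            if known > 0:
                # renormalize so the known pixels carry the full kernel weight
                out[i, j] = (win * m * kernel).sum() * (kernel.size / known)
                new_mask[i, j] = 1.0
    return out, new_mask
```

Applying the same filter uniformly over known and unknown areas (i.e. dropping the mask) is exactly what produces the visual artifacts the paper describes; the renormalization removes the hole's influence.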
arXiv Detail & Related papers (2020-11-02T16:05:32Z)
- Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
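The ICNN construction guarantees convexity in the input by restricting the hidden-to-hidden weights to be nonnegative and using a convex, nondecreasing activation. A minimal forward pass illustrating the standard construction (weights here are hypothetical, not the paper's trained regularizer):

```python
import numpy as np

def icnn(y, Wy, Wz, b):
    """Minimal input-convex neural network f(y).
    Convex in y because each hidden-to-hidden matrix in Wz is
    elementwise nonnegative and relu is convex and nondecreasing."""
    z = np.maximum(Wy[0] @ y + b[0], 0.0)             # first layer: affine in y
    for k in range(1, len(Wy)):
        # Wz[k-1] >= 0 preserves convexity; the Wy[k] skip term is affine in y
        z = np.maximum(Wz[k - 1] @ z + Wy[k] @ y + b[k], 0.0)
    return float(z.sum())                              # scalar regularizer value
```

Nonnegativity of Wz is what makes the composition convex, so f can serve as a convex regularizer with the usual guarantees for the resulting variational problem.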
arXiv Detail & Related papers (2020-08-06T18:58:35Z)
- Weighted Encoding Based Image Interpolation With Nonlocal Linear Regression Model [8.013127492678272]
In image super-resolution, the low-resolution image is directly down-sampled from its high-resolution counterpart without blurring or noise.
To address this problem, we propose a novel image model based on sparse representation.
We present a new approach that learns adaptive sub-dictionaries online instead of via clustering.
arXiv Detail & Related papers (2020-03-04T03:20:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.