Computerized Tomography and Reproducing Kernels
- URL: http://arxiv.org/abs/2311.07465v2
- Date: Mon, 24 Jun 2024 20:42:24 GMT
- Title: Computerized Tomography and Reproducing Kernels
- Authors: Ho Yun, Victor M. Panaretos
- Abstract summary: We consider the X-ray transform as an operator between Reproducing Kernel Hilbert Spaces.
Within this framework, the X-ray transform can be viewed as a natural analogue of Euclidean projection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The X-ray transform is one of the most fundamental integral operators in image processing and reconstruction. In this article, we revisit the formalism of the X-ray transform by considering it as an operator between Reproducing Kernel Hilbert Spaces (RKHS). Within this framework, the X-ray transform can be viewed as a natural analogue of Euclidean projection. The RKHS framework considerably simplifies projection image interpolation, and leads to an analogue of the celebrated representer theorem for the problem of tomographic reconstruction. It leads to methodology that is dimension-free and stands apart from conventional filtered back-projection techniques, as it does not hinge on the Fourier transform. It also allows us to establish sharp stability results at a genuinely functional level (i.e. without recourse to discretization), but in the realistic setting where the data are discrete and noisy. The RKHS framework is versatile, accommodating any reproducing kernel on a unit ball, affording a high level of generality. When the kernel is chosen to be rotation-invariant, explicit spectral representations can be obtained, elucidating the regularity structure of the associated Hilbert spaces. Moreover, the reconstruction problem can be solved at the same computational cost as filtered back-projection.
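To make the representer-theorem viewpoint concrete, the following is a minimal, hedged sketch (not the authors' implementation): reconstruct a function on the unit disk from noisy line integrals by kernel regression over the representers of the line-integral functionals, using a Gaussian kernel and crude quadrature. The function names, the kernel choice, and the regularization scheme are illustrative assumptions, not taken from the paper.
```python
# Hedged sketch: generic RKHS / representer-theorem reconstruction from noisy line
# integrals y_i = integral of f along line (theta_i, s_i) + noise. Not the paper's code.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.3):
    """K(x, y) = exp(-||x - y||^2 / (2 h^2)), evaluated pairwise for point sets X, Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def line_points(theta, s, n_quad=40):
    """Quadrature nodes on the chord of the unit disk with normal angle theta and
    signed offset s (|s| < 1); returns nodes of shape (n_quad, 2) and a scalar weight."""
    half = np.sqrt(max(1.0 - s ** 2, 0.0))            # half chord length
    t = np.linspace(-half, half, n_quad)               # parameter along the chord
    normal = np.array([np.cos(theta), np.sin(theta)])
    tangent = np.array([-np.sin(theta), np.cos(theta)])
    pts = s * normal[None, :] + t[:, None] * tangent[None, :]
    w = (2.0 * half) / n_quad                           # crude uniform quadrature weight
    return pts, w

def reconstruct(thetas, offsets, y, lam=1e-2, n_quad=40):
    """Solve (G + lam*n*I) alpha = y, where G_ij is the double line integral of K."""
    n = len(y)
    nodes, weights = zip(*(line_points(th, s, n_quad) for th, s in zip(thetas, offsets)))
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = weights[i] * weights[j] * gaussian_kernel(nodes[i], nodes[j]).sum()
    alpha = np.linalg.solve(G + lam * n * np.eye(n), np.asarray(y, dtype=float))

    def f_hat(x):
        """Evaluate the reconstruction at query points x of shape (m, 2)."""
        vals = np.zeros(len(x))
        for a, pts, w in zip(alpha, nodes, weights):
            vals += a * w * gaussian_kernel(np.asarray(x, dtype=float), pts).sum(axis=1)
        return vals
    return f_hat

# Example use: f_hat = reconstruct(thetas, offsets, y); image = f_hat(grid_points).
```
Under these assumptions, `reconstruct` returns a closure that can be evaluated at arbitrary points, which mirrors the functional (discretization-free) character emphasized in the abstract; the regularized linear solve plays the role that filtering plays in filtered back-projection.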
Related papers
- Rotation Equivariant Arbitrary-scale Image Super-Resolution [62.41329042683779]
Arbitrary-scale image super-resolution (ASISR) aims to achieve arbitrary-scale high-resolution recoveries from a low-resolution input image. In this study, we construct a rotation equivariant ASISR method.
arXiv Detail & Related papers (2025-08-07T08:51:03Z) - Image denoising as a conditional expectation [0.0]
Most common techniques estimate the true image as a projection onto some subspace. We propose an interpretation of a noisy image as a collection of samples drawn from a certain probability space. We present a data-driven denoising method in which the true image is recovered as a conditional expectation.
arXiv Detail & Related papers (2025-05-24T21:30:56Z) - Mixed-granularity Implicit Representation for Continuous Hyperspectral Compressive Reconstruction [16.975538181162616]
This study introduces a novel method using implicit neural representation for continuous hyperspectral image reconstruction.
By leveraging implicit neural representations, the MGIR framework enables reconstruction at any desired spatial-spectral resolution.
arXiv Detail & Related papers (2025-03-17T03:37:42Z) - R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z) - In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to keep the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z) - FunkNN: Neural Interpolation for Functional Generation [23.964801524703052]
FunkNN is a new convolutional network which learns to reconstruct continuous images at arbitrary coordinates and can be applied to any image dataset.
We show that FunkNN generates high-quality continuous images and exhibits strong out-of-distribution performance thanks to its patch-based design.
arXiv Detail & Related papers (2022-12-20T16:37:20Z) - Editing Out-of-domain GAN Inversion via Differential Activations [56.62964029959131]
We propose a novel GAN prior based editing framework to tackle the out-of-domain inversion problem with a composition-decomposition paradigm.
With the aid of the generated Diff-CAM mask, a coarse reconstruction can intuitively be composited by the paired original and edited images.
In the decomposition phase, we further present a GAN prior based deghosting network for separating the final fine edited image from the coarse reconstruction.
arXiv Detail & Related papers (2022-07-17T10:34:58Z) - Learning Local Implicit Fourier Representation for Image Warping [11.526109213908091]
We propose a local texture estimator for image warping (LTEW) followed by an implicit neural representation to deform images into continuous shapes.
Our LTEW-based neural function outperforms existing warping methods for asymmetric-scale SR and homography transform.
arXiv Detail & Related papers (2022-07-05T06:30:17Z) - Orthonormal Convolutions for the Rotation Based Iterative Gaussianization [64.44661342486434]
This paper elaborates an extension of rotation-based iterative Gaussianization, RBIG, which makes image Gaussianization possible.
In images its application has been restricted to small image patches or isolated pixels, because rotation in RBIG is based on principal or independent component analysis.
We present the Convolutional RBIG: an extension that alleviates this issue by imposing that the rotation in RBIG is a convolution.
arXiv Detail & Related papers (2022-06-08T12:56:34Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Non-local Meets Global: An Iterative Paradigm for Hyperspectral Image Restoration [66.68541690283068]
We propose a unified paradigm combining the spatial and spectral properties for hyperspectral image restoration.
The proposed paradigm enjoys performance superiority from the non-local spatial denoising and light computation complexity.
Experiments on HSI denoising, compressed reconstruction, and inpainting tasks, with both simulated and real datasets, demonstrate its superiority.
arXiv Detail & Related papers (2020-10-24T15:53:56Z) - Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems (a minimal illustrative ICNN sketch appears after this list).
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
arXiv Detail & Related papers (2020-08-06T18:58:35Z)
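Relating to the learned convex regularizers entry above, here is a minimal, hedged NumPy sketch of an input-convex neural network (ICNN) forward pass; it is not the paper's code. Convexity of R(x) in x is guaranteed by keeping the weights on the latent path non-negative and using a convex, non-decreasing activation. All names, widths, and initializations are illustrative assumptions.
```python
# Hedged sketch of an input-convex network R(x); training and the reconstruction
# loop are omitted. Not the paper's implementation.
import numpy as np

def softplus(t):
    # Convex and non-decreasing, which preserves convexity under composition.
    return np.logaddexp(0.0, t)

class ICNN:
    """Minimal input-convex network R(x); x is a flattened image vector."""
    def __init__(self, dim_x, widths, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = [rng.normal(scale=0.1, size=(w, dim_x)) for w in widths]      # unconstrained
        self.Wz = [np.abs(rng.normal(scale=0.1, size=(w2, w1)))                 # >= 0 for convexity
                   for w1, w2 in zip(widths[:-1], widths[1:])]
        self.b = [np.zeros(w) for w in widths]
        self.w_out = np.abs(rng.normal(scale=0.1, size=widths[-1]))             # >= 0 for convexity

    def __call__(self, x):
        z = softplus(self.Wx[0] @ x + self.b[0])
        for Wx, Wz, b in zip(self.Wx[1:], self.Wz, self.b[1:]):
            z = softplus(Wx @ x + Wz @ z + b)
        return float(self.w_out @ z)

# Example: R = ICNN(dim_x=64 * 64, widths=[128, 128, 64]); value = R(np.zeros(64 * 64))
# A reconstruction would then minimize ||A u - y||^2 + lam * R(u) by a (sub)gradient
# method, in the spirit of the entry above.
```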
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.