Karhunen-Loève Data Imputation in High Contrast Imaging
- URL: http://arxiv.org/abs/2308.16912v1
- Date: Thu, 31 Aug 2023 17:59:59 GMT
- Title: Karhunen-Loève Data Imputation in High Contrast Imaging
- Authors: Bin B. Ren
- Abstract summary: We propose the data imputation concept for the Karhunen-Loève transform (DIKL) by modifying two steps in the standard Karhunen-Loève image projection method.
As an analytical approach, DIKL achieves high-quality results with significantly reduced computational cost.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detection and characterization of extended structures is a crucial goal in
high contrast imaging. However, these structures face challenges in data
reduction, leading to over-subtraction from speckles and self-subtraction with
most existing methods. Iterative post-processing methods offer promising
results, but their integration into existing pipelines is hindered by selective
algorithms, high computational cost, and algorithmic regularization. To address
this for reference differential imaging (RDI), here we propose the data
imputation concept for the Karhunen-Loève transform (DIKL) by modifying two steps
in the standard Karhunen-Loève image projection (KLIP) method. Specifically,
we partition an image into two matrices: an anchor matrix that focuses only on
the speckles to obtain the DIKL coefficients, and a boat matrix which focuses
on the regions of astrophysical interest for speckle removal using DIKL
components. As an analytical approach, DIKL achieves high-quality results with
significantly reduced computational cost (~3 orders of magnitude less than
iterative methods). Being a derivative method of KLIP, DIKL is seamlessly
integrable into high contrast imaging pipelines for RDI observations.
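The anchor/boat partition described in the abstract can be sketched in a few lines of numpy. This is an illustrative sketch of the idea only: the function name, the mean-subtraction and normalization choices, and the eigendecomposition details are assumptions, not the paper's exact implementation.

```python
import numpy as np

def dikl_subtract(target, refs, anchor_mask, boat_mask, n_comp=5):
    """Hypothetical sketch of KLIP-style RDI with a DIKL anchor/boat split.

    target      : 1-D flattened science image
    refs        : (n_ref, n_pix) flattened reference images
    anchor_mask : boolean mask selecting speckle-dominated pixels
    boat_mask   : boolean mask selecting the region of astrophysical interest
    """
    # Build KL components from the anchor region of the references only.
    A = refs[:, anchor_mask]
    A = A - A.mean(axis=1, keepdims=True)
    cov = A @ A.T
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1][:n_comp]    # keep the top n_comp
    # KL basis over the anchor pixels (rows are components).
    kl_anchor = (evecs[:, order].T @ A) / np.sqrt(evals[order])[:, None]
    # Projection coefficients come from the anchor region of the target...
    coeff = kl_anchor @ target[anchor_mask]
    # ...while the speckle model is evaluated on the boat region, using the
    # matching components built from the boat pixels of the references.
    B = refs[:, boat_mask]
    B = B - B.mean(axis=1, keepdims=True)
    kl_boat = (evecs[:, order].T @ B) / np.sqrt(evals[order])[:, None]
    model_boat = coeff @ kl_boat
    return target[boat_mask] - model_boat
```

Because the eigendecomposition is done once on the small reference covariance, the whole step is a handful of matrix products, which is consistent with the abstract's claim of an analytical, low-cost approach.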
Related papers
- Denoising via Repainting: an image denoising method using layer-wise medical image repainting [6.195127726026568]
We propose a multi-scale approach that integrates anisotropic Gaussian filtering and progressive Bezier-path redrawing.
Our method constructs a scale-space pyramid to mitigate noise while preserving structural details.
Empirical results on multiple MRI datasets demonstrate consistent improvements in PSNR and SSIM over competing methods.
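The scale-space pyramid idea in the summary above can be illustrated with a toy numpy sketch. The paper's anisotropic Gaussian filtering and Bezier-path repainting are omitted; this only shows the coarse-to-fine blending behind a scale-space pyramid, with box filtering as a stand-in smoother.

```python
import numpy as np

def pyramid_denoise(img, levels=3):
    """Toy isotropic scale-space pyramid denoiser (illustrative only)."""
    def blur(u):
        # 3x3 box filter with edge padding, a crude smoothing stand-in.
        p = np.pad(u, 1, mode="edge")
        return sum(p[i:i + u.shape[0], j:j + u.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    # Build the pyramid: progressively smoother copies of the image.
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1]))
    # Blend the levels, weighting the smoother levels more strongly.
    weights = np.arange(1, levels + 1, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, pyr))
```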
arXiv Detail & Related papers (2025-03-11T06:54:37Z) - Learning Efficient and Effective Trajectories for Differential Equation-based Image Restoration [59.744840744491945]
In this paper, we reformulate the trajectory optimization of this kind of method, focusing on enhancing both reconstruction quality and efficiency.
To mitigate the considerable computational burden associated with iterative sampling, we propose cost-aware trajectory distillation.
We fine-tune a foundational diffusion model (FLUX) with 12B parameters by using our algorithms, producing a unified framework for handling 7 kinds of image restoration tasks.
arXiv Detail & Related papers (2024-10-07T07:46:08Z) - MODEL&CO: Exoplanet detection in angular differential imaging by learning across multiple observations [37.845442465099396]
Most post-processing methods build a model of the nuisances from the target observations themselves.
We propose to build the nuisance model from an archive of multiple observations by leveraging supervised deep learning techniques.
We apply the proposed algorithm to several datasets from the VLT/SPHERE instrument, and demonstrate a superior precision-recall trade-off.
arXiv Detail & Related papers (2024-09-23T09:22:45Z) - Partitioned Hankel-based Diffusion Models for Few-shot Low-dose CT Reconstruction [10.158713017984345]
We propose a few-shot low-dose CT reconstruction method using Partitioned Hankel-based Diffusion (PHD) models.
In the iterative reconstruction stage, an iterative differential equation solver is employed along with data consistency constraints to update the acquired projection data.
The results approximate those of normal-dose counterparts, validating the PHD model as an effective and practical model for reducing artifacts and noise while preserving image quality.
arXiv Detail & Related papers (2024-05-27T13:44:53Z) - Image-level Regression for Uncertainty-aware Retinal Image Segmentation [3.7141182051230914]
We introduce a novel Uncertainty-Aware (SAUNA) transform, which adds pixel uncertainty to the ground truth.
Our results indicate that the integration of the SAUNA transform and these segmentation losses led to significant performance boosts for different segmentation models.
arXiv Detail & Related papers (2024-05-27T04:17:10Z) - Learning to Rank Patches for Unbiased Image Redundancy Reduction [80.93989115541966]
Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated.
Existing approaches strive to overcome this limitation by reducing less meaningful image regions.
We propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches.
arXiv Detail & Related papers (2024-03-31T13:12:41Z) - Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction [4.227116189483428]
This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation framework.
It includes the low-quality image generation in latent space and the high-quality image generation in pixel space.
It minimizes computational costs by moving some inference steps from pixel space to latent space.
arXiv Detail & Related papers (2024-03-14T12:58:28Z) - Improving Pixel-based MIM by Reducing Wasted Modeling Capability [77.99468514275185]
We propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction.
To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures.
Our method yields significant performance gains, such as 1.2% on fine-tuning, 2.8% on linear probing, and 2.6% on semantic segmentation.
arXiv Detail & Related papers (2023-08-01T03:44:56Z) - Difference of Anisotropic and Isotropic TV for Segmentation under Blur and Poisson Noise [2.6381163133447836]
We adopt a smoothing-and-thresholding (SaT) segmentation framework that finds a piecewise-smooth solution, followed by $k$-means clustering to segment the image.
Specifically, for the image smoothing step, we replace the regularization in the Mumford-Shah model with the weighted difference of anisotropic and isotropic total variation (AITV).
Convergence analysis is provided to validate the efficacy of the scheme.
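The two-stage SaT idea (smooth first, then threshold) can be illustrated with a toy numpy sketch. Box filtering here stands in for the AITV-regularized smoothing, and a simple 1-D k-means on intensities stands in for the paper's clustering step; both substitutions are assumptions for illustration.

```python
import numpy as np

def sat_segment(img, k=2, iters=20, smooth_passes=5):
    """Toy smoothing-and-thresholding (SaT) segmentation sketch."""
    # Stage 1: crude smoothing via repeated 3x3 box filtering
    # (a stand-in for the AITV-regularized smoothing step).
    u = img.astype(float)
    for _ in range(smooth_passes):
        p = np.pad(u, 1, mode="edge")
        u = sum(p[i:i + u.shape[0], j:j + u.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    # Stage 2: 1-D k-means on the pixel intensities of the smoothed image.
    vals = u.ravel()
    centers = np.quantile(vals, np.linspace(0, 1, k + 2)[1:-1])
    for _ in range(iters):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = vals[labels == c].mean()
    return labels.reshape(img.shape)
```

Decoupling the smoothing from the thresholding is what makes SaT-style schemes modular: any smoother can feed the same clustering stage.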
arXiv Detail & Related papers (2023-01-06T01:14:56Z) - Towards Top-Down Just Noticeable Difference Estimation of Natural Images [65.14746063298415]
Just noticeable difference (JND) estimation is mainly dedicated to modeling the visibility masking effects of different factors in the spatial and frequency domains.
In this work, we turn to a dramatically different way to address these problems with a top-down design philosophy.
Our proposed JND model can achieve better performance than several latest JND models.
arXiv Detail & Related papers (2021-08-11T06:51:50Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Kullback-Leibler Divergence-Based Fuzzy $C$-Means Clustering Incorporating Morphological Reconstruction and Wavelet Frames for Image Segmentation [152.609322951917]
We propose a Kullback-Leibler (KL) divergence-based Fuzzy C-Means (FCM) algorithm by incorporating a tight wavelet frame transform and a morphological reconstruction operation.
The proposed algorithm performs well, with better segmentation performance than comparable algorithms.
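A minimal sketch of the plain fuzzy C-means core on pixel intensities, for orientation: the KL-divergence term, the tight wavelet frame transform, and the morphological reconstruction that the paper adds are all omitted here, so this is only the baseline the paper builds on.

```python
import numpy as np

def fcm(vals, k=2, m=2.0, iters=50):
    """Plain 1-D fuzzy C-means on intensities (baseline sketch only)."""
    # Deterministic init: spread the centers across the value range.
    centers = np.quantile(vals, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        d = np.abs(vals[:, None] - centers[None, :]) + 1e-12
        # Membership update: u_ic proportional to d_ic^(-2/(m-1)),
        # normalized over the clusters c.
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # Center update: mean of the values weighted by u^m.
        w = u ** m
        centers = (w * vals[:, None]).sum(axis=0) / w.sum(axis=0)
    return centers, u
```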
arXiv Detail & Related papers (2020-02-21T05:19:10Z) - Data Augmentation for Histopathological Images Based on Gaussian-Laplacian Pyramid Blending [59.91656519028334]
Data imbalance is a major problem that affects several machine learning (ML) algorithms.
In this paper, we propose a novel approach capable of not only augmenting the histopathological image (HI) dataset but also distributing the inter-patient variability.
Experimental results on the BreakHis dataset have shown promising gains vis-a-vis the majority of DA techniques presented in the literature.
arXiv Detail & Related papers (2020-01-31T22:02:57Z) - A multistep segmentation algorithm for vessel extraction in medical imaging [0.3683202928838613]
We propose an iterative procedure for tubular structure segmentation of 2D images, which combines a tight frame of curvelet transforms with SURE-technique thresholding.
The proposed algorithm builds mainly on the TFA proposal presented in [1, 9]; we use eigenvectors of the Hessian matrix of the image to improve the iterative part when segmenting unclear and narrow vessels.
Experimental results are presented to demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2014-12-30T15:28:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.