TDiff: Thermal Plug-And-Play Prior with Patch-Based Diffusion
- URL: http://arxiv.org/abs/2510.06460v1
- Date: Tue, 07 Oct 2025 20:54:34 GMT
- Title: TDiff: Thermal Plug-And-Play Prior with Patch-Based Diffusion
- Authors: Piyush Dashpute, Niki Nezakati, Wolfgang Heidrich, Vishwanath Saragadam
- Abstract summary: We propose a patch-based diffusion framework (TDiff) that leverages the local nature of these distortions by training on small thermal patches. Full-resolution images are restored by denoising overlapping patches and blending them using smooth spatial windowing. Experiments on denoising, super-resolution, and deblurring demonstrate strong results on both simulated and real thermal data.
- Score: 13.921428908649455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Thermal images from low-cost cameras often suffer from low resolution, fixed pattern noise, and other localized degradations. Available datasets for thermal imaging are also limited in both size and diversity. To address these challenges, we propose a patch-based diffusion framework (TDiff) that leverages the local nature of these distortions by training on small thermal patches. In this approach, full-resolution images are restored by denoising overlapping patches and blending them using smooth spatial windowing. To our knowledge, this is the first patch-based diffusion framework that models a learned prior for thermal image restoration across multiple tasks. Experiments on denoising, super-resolution, and deblurring demonstrate strong results on both simulated and real thermal data, establishing our method as a unified restoration pipeline.
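The overlap-and-blend step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the Hann window, the function name `blend_patches`, and the coordinate convention are all assumptions made for the sketch.

```python
import numpy as np

def blend_patches(image_shape, patches, coords, patch_size):
    """Blend overlapping (denoised) patches into a full image
    using a smooth 2-D Hann window to taper patch borders."""
    acc = np.zeros(image_shape, dtype=np.float64)
    weight = np.zeros(image_shape, dtype=np.float64)
    # Separable 2-D Hann window; small epsilon keeps corner weights nonzero.
    w1 = np.hanning(patch_size)
    win = np.outer(w1, w1) + 1e-8
    for patch, (y, x) in zip(patches, coords):
        acc[y:y + patch_size, x:x + patch_size] += patch * win
        weight[y:y + patch_size, x:x + patch_size] += win
    # Normalize by the accumulated window weights so overlaps average smoothly.
    return acc / np.maximum(weight, 1e-8)
```

With a stride smaller than the patch size, each pixel receives contributions from several patches, and the windowed average suppresses visible seams at patch boundaries.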
Related papers
- Latent Forcing: Reordering the Diffusion Trajectory for Pixel-Space Image Generation [36.41177812868683]
Latent diffusion models excel at generating high-quality images but lose the benefits of end-to-end modeling. We propose Latent Forcing, a simple modification to existing architectures that achieves the efficiency of latent diffusion while operating on raw natural images. Latent Forcing achieves a new state-of-the-art for diffusion transformer-based pixel generation at our compute scale.
arXiv Detail & Related papers (2026-02-11T22:09:58Z)
- TIR-Diffusion: Diffusion-based Thermal Infrared Image Denoising via Latent and Wavelet Domain Optimization [11.970228442183476]
We propose a diffusion-based TIR image denoising framework. Our method fine-tunes the model via a novel loss function combining latent-space and discrete wavelet transform (DWT) / dual-tree complex wavelet transform (DTCWT) losses. Experiments on benchmark datasets demonstrate superior performance of our approach compared to state-of-the-art denoising methods.
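The combined latent-plus-wavelet loss described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: a single-level Haar transform stands in for the paper's DWT/DTCWT terms, and the weights `lam_latent` and `lam_wavelet` and all function names are hypothetical, not taken from the paper.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.
    Assumes x has even height and width."""
    a = (x[0::2] + x[1::2]) / 2.0   # row-wise low-pass
    d = (x[0::2] - x[1::2]) / 2.0   # row-wise high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def combined_loss(z_pred, z_gt, x_pred, x_gt, lam_latent=1.0, lam_wavelet=0.1):
    """Weighted sum of an MSE term in latent space and an L1 term
    over wavelet subbands of the decoded images."""
    latent_term = np.mean((z_pred - z_gt) ** 2)
    wavelet_term = sum(np.mean(np.abs(p - g))
                       for p, g in zip(haar_dwt2(x_pred), haar_dwt2(x_gt)))
    return lam_latent * latent_term + lam_wavelet * wavelet_term
```

The wavelet term penalizes errors separately in low- and high-frequency subbands, which is one way such a loss can emphasize fine detail that a plain latent-space MSE tends to blur.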
arXiv Detail & Related papers (2025-07-30T06:27:32Z)
- DoubleDiffusion: Combining Heat Diffusion with Denoising Diffusion for Texture Generation on 3D Meshes [67.39455433337316]
We propose a novel approach that directly generates texture on 3D meshes. By integrating this technique into a generative diffusion pipeline, we significantly improve the efficiency of texture generation.
arXiv Detail & Related papers (2025-01-06T21:34:52Z)
- Denoising Monte Carlo Renders with Diffusion Models [5.228564799458042]
Physically-based renderings contain Monte-Carlo noise, with variance that increases as the number of rays per pixel decreases.
This noise, while zero-mean for good modern renderers, can have heavy tails.
We demonstrate that a diffusion model can denoise low fidelity renders successfully.
arXiv Detail & Related papers (2024-03-30T23:19:40Z)
- Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z)
- Real-World Denoising via Diffusion Model [14.722529440511446]
Real-world image denoising aims to recover clean images from noisy images captured in natural environments.
Diffusion models have achieved very promising results in the field of image generation, outperforming previous generative models.
This paper proposes a novel general denoising diffusion model that can be used for real-world image denoising.
arXiv Detail & Related papers (2023-05-08T04:48:03Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using Denoising Diffusion Probabilistic Models [64.24948495708337]
Atmospheric turbulence causes significant degradation to image quality by introducing blur and geometric distortion.
Various deep learning-based single image atmospheric turbulence mitigation methods, including CNN-based and GAN inversion-based, have been proposed.
Denoising Diffusion Probabilistic Models (DDPMs) have recently gained some traction because of their stable training process and their ability to generate high quality images.
arXiv Detail & Related papers (2022-08-24T03:13:04Z)
- Restoring Vision in Adverse Weather Conditions with Patch-Based Denoising Diffusion Models [8.122270502556374]
We present a novel patch-based image restoration algorithm based on denoising diffusion probabilistic models.
We demonstrate that our approach achieves state-of-the-art performance on both weather-specific and multi-weather image restoration.
arXiv Detail & Related papers (2022-07-29T11:52:41Z)
- Thermal to Visible Image Synthesis under Atmospheric Turbulence [67.99407460140263]
In biometrics and surveillance, thermal imaging modalities are often used to capture images in low-light and nighttime conditions.
Such imaging systems often suffer from atmospheric turbulence, which introduces severe blur and deformation artifacts to the captured images.
An end-to-end reconstruction method is proposed which can directly transform thermal images into visible-spectrum images.
arXiv Detail & Related papers (2022-04-06T19:47:41Z)
- Learning to Restore a Single Face Image Degraded by Atmospheric Turbulence using CNNs [93.72048616001064]
Images captured under such conditions suffer from a combination of geometric deformation and spatially varying blur.
We present a deep learning-based solution to the problem of restoring a turbulence-degraded face image.
arXiv Detail & Related papers (2020-07-16T15:25:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.