Collaborative Texture Filtering
- URL: http://arxiv.org/abs/2506.17770v1
- Date: Sat, 21 Jun 2025 17:46:57 GMT
- Title: Collaborative Texture Filtering
- Authors: Tomas Akenine-Möller, Pontus Ebelin, Matt Pharr, Bartlomiej Wronski
- Abstract summary: Recent advances in texture compression provide major improvements in compression ratios, but cannot use the GPU's texture units for decompression and filtering. We present novel algorithms that use wave communication between lanes to avoid repeated texel decompression prior to filtering. By distributing unique work across lanes, we can achieve zero-error filtering using <=1 texel evaluations per pixel given a sufficiently large magnification factor.
- Score: 1.7949335303516187
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in texture compression provide major improvements in compression ratios, but cannot use the GPU's texture units for decompression and filtering. This has led to the development of stochastic texture filtering (STF) techniques to avoid the high cost of multiple texel evaluations with such formats. Unfortunately, those methods can give undesirable visual appearance changes under magnification and may contain visible noise and flicker despite the use of spatiotemporal denoisers. Recent work substantially improves the quality of magnification filtering with STF by sharing decoded texel values between nearby pixels (Wronski 2025). Using GPU wave communication intrinsics, this sharing can be performed inside actively executing shaders without memory traffic overhead. We take this idea further and present novel algorithms that use wave communication between lanes to avoid repeated texel decompression prior to filtering. By distributing unique work across lanes, we can achieve zero-error filtering using <=1 texel evaluations per pixel given a sufficiently large magnification factor. For the remaining cases, we propose novel filtering fallback methods that also achieve higher quality than prior approaches.
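The core idea in the abstract can be illustrated with a small CPU simulation: lanes of a GPU wave each request one texel, the wave deduplicates the requests so each unique texel is decoded only once, and the decoded values are broadcast back to every requesting lane. The sketch below is a hypothetical illustration, not the paper's implementation; the wave size, `decode_texel` stand-in, and function names are assumptions, and the dictionary broadcast mimics what wave intrinsics such as `WaveActiveBallot`/`WaveReadLaneAt` would coordinate on a real GPU.

```python
# Hypothetical CPU simulation of wave-level texel sharing (not the
# paper's actual GPU code): lanes deduplicate texel requests so each
# unique texel is decoded at most once, then results are broadcast.

WAVE_SIZE = 32  # typical wave/warp width; an assumption for this sketch


def decode_texel(coord):
    """Stand-in for an expensive texel decode from a compressed format."""
    x, y = coord
    return (x * 31 + y * 17) % 256  # dummy "decoded" value


def collaborative_fetch(requests):
    """Decode each unique texel once, then broadcast to requesting lanes.

    `requests` holds one texel coordinate per active lane
    (len(requests) <= WAVE_SIZE).
    """
    assert len(requests) <= WAVE_SIZE
    unique = sorted(set(requests))                   # distribute unique work
    decoded = {c: decode_texel(c) for c in unique}   # one decode per texel
    stats = {"decodes": len(unique), "lanes": len(requests)}
    return [decoded[c] for c in requests], stats
```

Under strong magnification many adjacent pixels map to the same texel, so `stats["decodes"] / stats["lanes"]` drops well below one decode per pixel, which is the regime where the paper reports <=1 texel evaluations per pixel.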
Related papers
- Learning Deblurring Texture Prior from Unpaired Data with Diffusion Model [92.61216319417208]
We propose a novel diffusion model (DM)-based framework for image deblurring. The framework uses the DM to generate prior knowledge that aids in recovering the textures of blurry images. To fully exploit the generated texture priors, we present the Texture Transfer Transformer layer (TTformer).
arXiv Detail & Related papers (2025-07-18T01:50:31Z) - Improved Stochastic Texture Filtering Through Sample Reuse [1.9608359347635143]
Stochastic texture filtering (STF) has re-emerged as a technique that can bring down the cost of texture filtering for advanced texture compression methods. During texture magnification, the swapped order of filtering and shading with STF can result in aliasing. We present a novel method to improve the quality of stochastically filtered magnified textures and reduce the image difference.
arXiv Detail & Related papers (2025-04-07T23:28:52Z) - Scene Prior Filtering for Depth Super-Resolution [97.30137398361823]
We introduce a Scene Prior Filtering network, SPFNet, to mitigate texture interference and edge inaccuracy.
Our SPFNet has been extensively evaluated on both real and synthetic datasets, achieving state-of-the-art performance.
arXiv Detail & Related papers (2024-02-21T15:35:59Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - Stochastic Texture Filtering [3.4202659118354104]
Filtered texture lookups are integral to producing high-quality imagery.
We show that filtering after evaluating lighting, rather than before BSDF evaluation as is current practice, gives a more accurate solution to the rendering equation.
We demonstrate applications in both real-time and offline rendering and show that the additional error is minimal.
arXiv Detail & Related papers (2023-05-09T23:50:25Z) - Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar/sonar/ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements than the full received time-domain signal.
arXiv Detail & Related papers (2022-09-28T15:16:58Z) - Image Restoration in Non-Linear Filtering Domain using MDB approach [0.0]
The aim of image enhancement is to reconstruct the true image from the corrupted image.
Image degradation can be due to the addition of different types of noise in the original image.
Impulse noise generates pixels with gray values that are inconsistent with their local neighbourhood.
arXiv Detail & Related papers (2022-04-20T08:23:52Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Anti-aliasing Deep Image Classifiers using Novel Depth Adaptive Blurring
and Activation Function [7.888131635057012]
Deep convolutional networks are vulnerable to image translation or shift.
The textbook solution is low-pass filtering before down-sampling.
We show that Depth Adaptive Blurring is more effective than monotonic blurring.
arXiv Detail & Related papers (2021-10-03T01:00:52Z) - How to Exploit the Transferability of Learned Image Compression to
Conventional Codecs [25.622863999901874]
We show how learned image coding can be used as a surrogate to optimize an image for encoding.
Our approach can remodel a conventional image to adjust for the MS-SSIM distortion with over 20% rate improvement without any decoding overhead.
arXiv Detail & Related papers (2020-12-03T12:34:51Z) - Permute, Quantize, and Fine-tune: Efficient Compression of Neural
Networks [70.0243910593064]
Key to success of vector quantization is deciding which parameter groups should be compressed together.
In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function.
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.
arXiv Detail & Related papers (2020-10-29T15:47:26Z) - Adaptive Debanding Filter [55.42929350861115]
Banding artifacts manifest as staircase-like color bands on pictures or video frames.
We propose a content-adaptive smoothing filtering followed by dithered quantization, as a post-processing module.
Experimental results show that our proposed debanding filter outperforms state-of-the-art false contour removing algorithms both visually and quantitatively.
arXiv Detail & Related papers (2020-09-22T20:44:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.