Iterative Predictor-Critic Code Decoding for Real-World Image Dehazing
- URL: http://arxiv.org/abs/2503.13147v2
- Date: Sat, 29 Mar 2025 06:25:23 GMT
- Title: Iterative Predictor-Critic Code Decoding for Real-World Image Dehazing
- Authors: Jiayi Fu, Siyu Liu, Zikun Liu, Chun-Le Guo, Hyunhee Park, Ruiqi Wu, Guoqing Wang, Chongyi Li
- Abstract summary: We propose a novel Iterative Predictor-Critic Code Decoding framework for real-world image dehazing, abbreviated as IPC-Dehaze. Our method utilizes high-quality codes obtained in the previous iteration to guide the prediction of the Code-Predictor in the subsequent iteration.
- Score: 30.834087480652194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel Iterative Predictor-Critic Code Decoding framework for real-world image dehazing, abbreviated as IPC-Dehaze, which leverages the high-quality codebook prior encapsulated in a pre-trained VQGAN. Unlike previous codebook-based methods that rely on one-shot decoding, our method utilizes high-quality codes obtained in the previous iteration to guide the prediction of the Code-Predictor in the subsequent iteration, improving code prediction accuracy and ensuring stable dehazing performance. Our idea stems from the observations that 1) the degradation of hazy images varies with haze density and scene depth, and 2) clear regions provide crucial cues for restoring dense haze regions. However, it is non-trivial to progressively refine the obtained codes in subsequent iterations, owing to the difficulty in determining which codes should be retained or replaced at each iteration. Another key insight of our study is the proposed Code-Critic, which captures interrelations among codes. The Code-Critic is used to evaluate code correlations and then resample the set of codes with the highest mask scores, where a higher score indicates that a code is more likely to be rejected; this helps retain more accurate codes and predict difficult ones. Extensive experiments demonstrate the superiority of our method over state-of-the-art methods in real-world dehazing.
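The iterative predictor-critic loop described in the abstract can be summarized in code. The following is a minimal sketch assuming a PyTorch-style implementation; the callable names (code_predictor, code_critic), tensor shapes, and the linearly increasing keep-ratio schedule are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of iterative predictor-critic code decoding (IPC-Dehaze-style).
# All interfaces and the keep-ratio schedule are assumptions for illustration.
import torch

def ipc_decode(hazy_feat, code_predictor, code_critic, codebook, num_iters=4):
    """Iteratively refine codebook indices for a hazy image feature map.

    hazy_feat:      (B, N, C) encoder features of the hazy image
    code_predictor: callable predicting logits over codebook entries,
                    conditioned on features and the codes kept so far
    code_critic:    callable scoring each code; a higher "mask score" means
                    the code is more likely to be rejected and re-predicted
    codebook:       (K, C) pre-trained, frozen VQGAN codebook
    """
    B, N, _ = hazy_feat.shape
    codes = torch.zeros(B, N, dtype=torch.long)       # current code indices
    keep_mask = torch.zeros(B, N, dtype=torch.bool)   # codes accepted so far

    for t in range(num_iters):
        # Predict codes for all positions, guided by the codes already kept.
        logits = code_predictor(hazy_feat, codes, keep_mask)   # (B, N, K)
        proposed = logits.argmax(dim=-1)
        codes = torch.where(keep_mask, codes, proposed)

        # Critic assigns a mask score per position (higher = likely rejected).
        scores = code_critic(hazy_feat, codes)                 # (B, N)

        # Keep the fraction (t+1)/num_iters of positions with the lowest
        # scores; the remaining positions are re-predicted next iteration.
        num_keep = int(N * (t + 1) / num_iters)
        keep_idx = scores.topk(num_keep, dim=-1, largest=False).indices
        keep_mask = torch.zeros_like(keep_mask).scatter(
            1, keep_idx, torch.ones_like(keep_idx, dtype=torch.bool))

    # Look up the final codes in the frozen codebook; feeding the result to
    # the VQGAN decoder (omitted here) would produce the dehazed image.
    return codebook[codes]                                     # (B, N, C)
```

In this sketch, positions with the lowest critic scores are frozen as reliable codes, mirroring the abstract's intuition that confidently restored clear regions should guide the re-prediction of harder, densely hazed regions in later iterations.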
Related papers
- Dual Codebook VQ: Enhanced Image Reconstruction with Reduced Codebook Size [0.0]
Vector Quantization (VQ) techniques face challenges in codebook utilization, limiting reconstruction fidelity in image modeling. We introduce a Dual Codebook mechanism that effectively addresses this limitation by partitioning the representation into complementary global and local components. Our approach achieves significant FID improvements across diverse image domains, particularly excelling in scene and face reconstruction tasks.
arXiv Detail & Related papers (2025-03-13T19:31:18Z)
- Threshold Selection for Iterative Decoding of $(v,w)$-regular Binary Codes [84.0257274213152]
Iterative bit flipping decoders are an efficient choice for sparse $(v,w)$-regular codes. We propose concrete criteria for threshold determination, backed by a closed-form model.
arXiv Detail & Related papers (2025-01-23T17:38:22Z)
- Gumbel-Softmax Discretization Constraint, Differentiable IDS Channel, and an IDS-Correcting Code for DNA Storage [1.4272256806865107]
We present an autoencoder-based method, THEA-code, aimed at efficiently generating IDS-correcting codes for complex IDS channels. A Gumbel-Softmax discretization constraint is proposed to discretize the features of the autoencoder. A simulated differentiable IDS channel is developed as a differentiable alternative for IDS operations.
arXiv Detail & Related papers (2024-07-10T06:52:56Z)
- Collective Bit Flipping-Based Decoding of Quantum LDPC Codes [0.6554326244334866]
We improve both the error correction performance and decoding latency of variable degree-3 (dv-3) QLDPC codes under iterative decoding.
Our decoding scheme is based on applying a modified version of bit flipping (BF) decoding, namely two-bit bit flipping (TBF) decoding.
arXiv Detail & Related papers (2024-06-24T18:51:48Z)
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to outperform the decoding performance of existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Towards Accurate Image Coding: Improved Autoregressive Image Generation with Dynamic Vector Quantization [73.52943587514386]
Existing vector quantization (VQ) based autoregressive models follow a two-stage generation paradigm.
We propose a novel two-stage framework: (1) Dynamic-Quantization VAE (DQ-VAE), which encodes image regions into variable-length codes based on their information densities for accurate representation.
arXiv Detail & Related papers (2023-05-19T14:56:05Z)
- Soft-Labeled Contrastive Pre-training for Function-level Code Representation [127.71430696347174]
We present SCodeR, a Soft-labeled contrastive pre-training framework with two positive sample construction methods.
Considering the relevance between codes in a large-scale code corpus, the soft-labeled contrastive pre-training can obtain fine-grained soft-labels.
SCodeR achieves new state-of-the-art performance on four code-related tasks over seven datasets.
arXiv Detail & Related papers (2022-10-18T05:17:37Z)
- Towards Robust Blind Face Restoration with Codebook Lookup Transformer [94.48731935629066]
Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance.
We show that a learned discrete codebook prior in a small proxy space casts blind face restoration as a code prediction task.
We propose a Transformer-based prediction network, named CodeFormer, to model global composition and context of the low-quality faces.
arXiv Detail & Related papers (2022-06-22T17:58:01Z)
- Addressing Leakage in Self-Supervised Contextualized Code Retrieval [3.693362838682697]
We address contextualized code retrieval, the search for code snippets helpful to fill gaps in a partial input program.
Our approach facilitates a large-scale self-supervised contrastive training by splitting source code randomly into contexts and targets.
To combat leakage between the two, we suggest a novel approach based on mutual identifier masking, dedentation, and the selection of syntax-aligned targets.
arXiv Detail & Related papers (2022-04-17T12:58:38Z)
- Deep Learning to Ternary Hash Codes by Continuation [8.920717493647121]
We propose to jointly learn the features with the codes by appending a smoothed function to the networks.
Experiments show that the generated codes can indeed achieve higher retrieval accuracy.
arXiv Detail & Related papers (2021-07-16T16:02:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.