Consistency Flow Model Achieves One-step Denoising Error Correction Codes
- URL: http://arxiv.org/abs/2512.01389v1
- Date: Mon, 01 Dec 2025 08:07:51 GMT
- Title: Consistency Flow Model Achieves One-step Denoising Error Correction Codes
- Authors: Haoyu Lei, Chin Wa Lau, Kaiwen Zhou, Nian Guo, Farzan Farnia
- Abstract summary: We introduce the Error Correction Consistency Flow Model (ECCFM) for high-fidelity one-step decoding. ECCFM attains lower bit-error rates (BER) than autoregressive and diffusion-based baselines. It delivers inference speeds 30x to 100x faster than denoising diffusion decoders.
- Score: 28.89866643527586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Error Correction Codes (ECC) are fundamental to reliable digital communication, yet designing neural decoders that are both accurate and computationally efficient remains challenging. Recent denoising diffusion decoders with transformer backbones achieve state-of-the-art performance, but their iterative sampling limits practicality in low-latency settings. We introduce the Error Correction Consistency Flow Model (ECCFM), an architecture-agnostic training framework for high-fidelity one-step decoding. By casting the reverse denoising process as a Probability Flow Ordinary Differential Equation (PF-ODE) and enforcing smoothness through a differential time regularization, ECCFM learns to map noisy signals along the decoding trajectory directly to the original codeword in a single inference step. Across multiple decoding benchmarks, ECCFM attains lower bit-error rates (BER) than autoregressive and diffusion-based baselines, with notable improvements on longer codes, while delivering inference speeds 30x to 100x faster than denoising diffusion decoders.
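To make the one-step training idea concrete, here is a minimal PyTorch-style sketch, assuming a generic decoder network `f_theta(x, t)`. The linear interpolant, the loss weight `lam`, and the finite-difference surrogate for the paper's differential time regularization are illustrative assumptions, not ECCFM's exact recipe.

```python
import torch
import torch.nn.functional as F

def eccfm_style_step(f_theta, codeword, sigma=1.0, lam=0.1):
    """One hypothetical training step: learn to map any point on the
    noising trajectory straight back to the clean codeword, while
    keeping the map smooth in t (a finite-difference stand-in for a
    differential time regularizer). All names are illustrative."""
    x0 = codeword                       # clean codeword, shape (B, n)
    noise = sigma * torch.randn_like(x0)
    t = torch.rand(x0.size(0), 1)       # random time in (0, 1)
    xt = (1 - t) * x0 + t * noise       # point on a PF-ODE-style path

    pred = f_theta(xt, t)               # one-step decode of xt
    recon = F.mse_loss(pred, x0)

    dt = 1e-2                           # nearby time: predictions should agree
    xt2 = (1 - (t + dt)) * x0 + (t + dt) * noise
    with torch.no_grad():
        target = f_theta(xt2, t + dt)   # stop-gradient consistency target
    return recon + lam * F.mse_loss(pred, target)
```

At inference, a single call such as `f_theta(received, t=1)` would replace the multi-step diffusion sampler, which is where a 30x to 100x speedup over iterative decoders would come from.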
Related papers
- Land-then-transport: A Flow Matching-Based Generative Decoder for Wireless Image Transmission [38.71668959954467]
We propose a flow-matching generative decoder for low-latency decoding. Experiments show consistent gains over JPEG2000+LDPC, DeepJSCC, and diffusion-based baselines. LTT provides a deterministic, physically interpretable, and efficient framework for generative wireless image decoding.
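For reference, the plain flow-matching objective such a decoder builds on fits in a few lines; this generic rectified-flow-style sketch is not LTT's architecture or conditioning.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(v_theta, x1):
    """Generic flow matching: regress a velocity network v_theta(x_t, t)
    onto the straight-line direction from a noise sample to the data.
    Not the LTT model; v_theta is any placeholder network."""
    x0 = torch.randn_like(x1)                          # source noise
    t = torch.rand(x1.size(0), *([1] * (x1.dim() - 1)))
    xt = (1 - t) * x0 + t * x1                         # linear interpolant
    return F.mse_loss(v_theta(xt, t), x1 - x0)         # constant target velocity
```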
arXiv Detail & Related papers (2026-01-12T13:09:37Z)
- Learning Binary Autoencoder-Based Codes with Progressive Training [1.8620637029128544]
Autoencoder (AE) based approaches have gained attention for the end-to-end design of communication systems. The reported results indicate that compact AE architectures can effectively learn structured, algebraically optimal binary codes through stable and straightforward training.
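A toy version of the AE-for-coding setup is sketched below; the layer sizes, AWGN channel, and single-stage training are assumptions for illustration and do not reproduce the paper's progressive scheme.

```python
import torch
import torch.nn as nn

class AECode(nn.Module):
    """Toy end-to-end autoencoder code: k message bits -> n real channel
    uses -> k bit logits. Illustrative only."""
    def __init__(self, k=4, n=7, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(k, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n), nn.Tanh())
        self.dec = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(),
                                 nn.Linear(hidden, k))

    def forward(self, bits, snr_db=4.0):
        x = self.enc(2.0 * bits - 1.0)               # antipodal mapping
        noise = 10 ** (-snr_db / 20.0) * torch.randn_like(x)
        return self.dec(x + noise)                   # decode AWGN output

msgs = torch.randint(0, 2, (32, 4)).float()
loss = nn.BCEWithLogitsLoss()(AECode()(msgs), msgs)  # end-to-end objective
```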
arXiv Detail & Related papers (2025-11-12T11:32:03Z)
- Decoding quantum low density parity check codes with diffusion [0.5855198111605814]
We introduce a diffusion model framework to infer logical errors from syndrome measurements in quantum low-density parity-check codes. We show that masked diffusion decoders are more accurate, often faster on average, and always faster in the worst case than other state-of-the-art decoders.
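The decoding problem such a model is trained on can be stated with a toy parity-check example; the masked-diffusion network itself is elided here, and the matrix below is a small binary stand-in, not a qLDPC code.

```python
import numpy as np

# Toy parity-check matrix H; the decoder observes only the syndrome
# s = H e mod 2 and must infer the hidden error pattern e. A masked
# diffusion decoder would start from a fully masked e and iteratively
# unmask bits; here we brute-force the minimum-weight solution instead.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
rng = np.random.default_rng(0)
e = rng.integers(0, 2, size=5)          # hidden error
s = H @ e % 2                           # observed syndrome
solutions = [c for c in np.ndindex(2, 2, 2, 2, 2)
             if ((H @ np.array(c)) % 2 == s).all()]
e_hat = min(solutions, key=sum)         # minimum-weight consistent error
```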
arXiv Detail & Related papers (2025-09-26T13:46:52Z)
- One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models [65.96186414865747]
Text-to-Image (T2I) diffusion models face a trade-off between inference speed and image quality. We introduce TiUE, the first Time-independent Unified Encoder, for the student model's UNet architecture. Using a one-pass scheme, TiUE shares encoder features across multiple decoder time steps, enabling parallel sampling.
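The one-pass scheme can be sketched abstractly: run the time-independent encoder once and reuse its features at every decoder timestep. `encoder` and `decoder_step` below are placeholders, not the paper's modules.

```python
import torch

def one_pass_sample(encoder, decoder_step, x, timesteps):
    """TiUE-style sketch: encoder features are computed once and shared
    across all decoder timesteps instead of being recomputed per step,
    so the per-step calls are independent and could run in parallel."""
    feats = encoder(x)                                  # single encoder pass
    outs = [decoder_step(feats, t) for t in timesteps]  # feature reuse
    return torch.stack(outs)
```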
arXiv Detail & Related papers (2025-05-28T04:23:22Z)
- One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Our method achieves strong performance on both full- and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z)
- Efficient Transformer-based Decoder for Varshamov-Tenengolts Codes [1.53119329713143]
Varshamov-Tenengolts (VT) codes, primarily designed for single-error correction, have emerged as a central research focus. While existing decoding methods achieve high accuracy in correcting a single error, they often fail to correct multiple insertion-deletion-substitution (IDS) errors. In this work, we introduce a transformer-based VT decoder and observe that VT codes retain some capability for addressing multiple errors.
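For context, the classical VT constraint underlying such decoders is easy to state: a binary word x of length n lies in VT_a(n) when its position-weighted sum is congruent to a modulo n+1. The transformer decoder itself is not reproduced here.

```python
import numpy as np

def vt_syndrome(x, a=0):
    """Check membership in the Varshamov-Tenengolts code VT_a(n):
    sum of i * x_i over 1-indexed positions must equal a mod (n + 1)."""
    n = len(x)
    return (np.arange(1, n + 1) @ np.asarray(x)) % (n + 1) == a

x = [1, 0, 1, 1, 0, 0, 1, 0]   # weighted sum 1 + 3 + 4 + 7 = 15
vt_syndrome(x, a=6)            # True: 15 mod 9 == 6, so x is in VT_6(8)
```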
arXiv Detail & Related papers (2025-02-28T13:59:14Z)
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic-operation energy consumption by a factor of at least 224 on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
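One way to read the masking idea, sketched below: replace the usual hard 0/-inf attention mask with a soft learnable one added to the attention logits, so gradients can flow back to whatever parameterizes the mask. This is an illustrative reading, not the paper's exact construction.

```python
import torch

def soft_masked_attention(q, k, v, mask_logits):
    """Differentiable attention masking sketch: a learnable soft mask
    (e.g., derived from a relaxed code structure) is added to the
    attention scores instead of a hard binary mask, keeping the whole
    path differentiable."""
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    soft_mask = torch.log(torch.sigmoid(mask_logits) + 1e-9)  # values <= 0
    return torch.softmax(scores + soft_mask, dim=-1) @ v
```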
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
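The cost of this approach comes from its sampling loop, of the generic form sketched below: dozens of sequential network calls per decoded word, each of which a one-step model collapses away. The Euler-style update and schedule are placeholder assumptions, not the paper's sampler.

```python
import torch

@torch.no_grad()
def iterative_diffusion_decode(eps_theta, y, steps=50):
    """Generic reverse-denoising loop: refine the channel output y with
    a learned denoiser eps_theta over many small steps, then take hard
    bit decisions. Placeholder schedule and update rule."""
    x = y.clone()
    for i in reversed(range(1, steps + 1)):
        t = torch.full((y.size(0), 1), i / steps)
        x = x - (1.0 / steps) * eps_theta(x, t)   # toy Euler-style update
    return x.sign()                               # +/-1 bit decisions
```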
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Error Correction Code Transformer [92.10654749898927]
We propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
Each channel output is embedded into a high-dimensional vector so that the information carried by each bit can be represented and processed separately, as sketched below.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
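The per-bit lifting mentioned above can be sketched in a few lines; the dimensions and the learned positional term are illustrative assumptions, not ECCT's exact embedding.

```python
import torch
import torch.nn as nn

class BitEmbedding(nn.Module):
    """Sketch of per-bit embedding: each received soft value becomes one
    high-dimensional token, so self-attention can mix information across
    bit positions."""
    def __init__(self, n_bits=31, d_model=64):
        super().__init__()
        self.lift = nn.Linear(1, d_model)                # scalar -> token
        self.pos = nn.Parameter(torch.randn(n_bits, d_model))

    def forward(self, y):               # y: (batch, n_bits) soft values
        return self.lift(y.unsqueeze(-1)) + self.pos     # (batch, n, d)
```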
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
- Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip [41.28049430114734]
We propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
Our IABF achieves state-of-the-art performance on both compression and error correction benchmarks and outperforms the baselines by a significant margin.
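A simplified reading of the bit-flip regularizer is sketched below: find the single code bit whose flip hurts the decoder most and train on that worst case. This is an assumption-laden simplification of IABF, not its exact adversarial procedure.

```python
import torch

def adversarial_bit_flip_loss(decoder, bits, target, loss_fn):
    """Flip the one code bit that maximally increases the loss, then
    train the decoder on the flipped input. Simplified sketch only."""
    with torch.no_grad():
        losses = []
        for j in range(bits.size(1)):
            flipped = bits.clone()
            flipped[:, j] = 1.0 - flipped[:, j]
            losses.append(loss_fn(decoder(flipped), target))
        worst = int(torch.stack(losses).argmax())
    adv = bits.clone()
    adv[:, worst] = 1.0 - adv[:, worst]
    return loss_fn(decoder(adv), target)
```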
arXiv Detail & Related papers (2020-04-03T10:00:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.