Diffusion-Augmented Contrastive Learning: A Noise-Robust Encoder for Biosignal Representations
- URL: http://arxiv.org/abs/2509.20048v2
- Date: Thu, 25 Sep 2025 23:19:39 GMT
- Title: Diffusion-Augmented Contrastive Learning: A Noise-Robust Encoder for Biosignal Representations
- Authors: Rami Zewail
- Abstract summary: We propose a novel hybrid framework, Diffusion-Augmented Contrastive Learning (DACL), that fuses concepts from diffusion models and supervised contrastive learning. It operates on a latent space created by a lightweight Variational Autoencoder (VAE) trained on our novel Scattering Transformer (ST) features. A U-Net style encoder is then trained with a supervised contrastive objective to learn a representation that balances class discrimination with robustness to noise across various diffusion time steps.
- Score: 0.4061135251278187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning robust representations for biosignals is often hampered by the challenge of designing effective data augmentations. Traditional methods can fail to capture the complex variations inherent in physiological data. Within this context, we propose a novel hybrid framework, Diffusion-Augmented Contrastive Learning (DACL), that fuses concepts from diffusion models and supervised contrastive learning. The DACL framework operates on a latent space created by a lightweight Variational Autoencoder (VAE) trained on our novel Scattering Transformer (ST) features [12]. It utilizes the diffusion forward process as a principled data augmentation technique to generate multiple noisy views of these latent embeddings. A U-Net style encoder is then trained with a supervised contrastive objective to learn a representation that balances class discrimination with robustness to noise across various diffusion time steps. We evaluated this proof-of-concept method on the PhysioNet 2017 ECG dataset, achieving a competitive AUROC of 0.7815. This work establishes a new paradigm for representation learning by using the diffusion process itself to drive the contrastive objective, creating noise-invariant embeddings that demonstrate a strong foundation for class separability.
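The core mechanism described in the abstract (noising latent embeddings via the diffusion forward process, then applying a supervised contrastive loss over the noisy views) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the Scattering Transformer features, VAE, and U-Net encoder are omitted, and the function names, the linear beta schedule, and all hyperparameters here are assumptions made for the example.

```python
import numpy as np

def forward_diffuse(z0, t, betas, rng):
    """Diffusion forward process q(z_t | z_0): draw one noisy 'view' of
    a batch of latent embeddings z0 at time step t (closed form,
    z_t = sqrt(alpha_bar_t) * z0 + sqrt(1 - alpha_bar_t) * eps)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar) * z0 + np.sqrt(1.0 - alpha_bar) * eps

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al. style): for each anchor,
    pull together all same-label samples, push apart the rest."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)          # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-probability over positives, averaged over valid anchors
    per_anchor = np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor[pos.sum(1) > 0].mean()

# Toy usage: noisy views of latent embeddings feed the contrastive objective.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 100)     # assumed linear schedule
latents = rng.standard_normal((8, 16))   # stand-in for VAE latents
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
t = rng.integers(0, len(betas))          # random diffusion time step
noisy_view = forward_diffuse(latents, t, betas, rng)
loss = supcon_loss(noisy_view, labels)
```

In the paper's framework the loss would be computed on the encoder's output rather than on the noisy latents directly; sampling `t` per batch is what exposes the encoder to all noise levels and yields the noise-robust representation.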
Related papers
- Guiding Diffusion-based Reconstruction with Contrastive Signals for Balanced Visual Representation [81.40978077888693]
Contrastive Language-Image Pre-training (CLIP) has become a key bottleneck for downstream performance. Recent solutions use diffusion models to enhance representations by conditioning image reconstruction on CLIP visual tokens. We integrate contrastive signals into diffusion-based reconstruction to pursue more comprehensive visual representations.
arXiv Detail & Related papers (2026-03-05T04:45:49Z) - InfoDCL: Informative Noise Enhanced Diffusion Based Contrastive Learning [14.525824265656558]
We propose a novel diffusion-based contrastive learning framework for recommendation. We employ a single-step diffusion process that integrates noise with auxiliary semantic information to generate signals. Experiments on five real-world datasets demonstrate that InfoDCL significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-12-18T14:15:31Z) - Diffusion-Classifier Synergy: Reward-Aligned Learning via Mutual Boosting Loop for FSCIL [19.094835780362775]
Few-Shot Class-Incremental Learning (FSCIL) challenges models to sequentially learn new classes from minimal examples. Current FSCIL methods often struggle with generalization due to their reliance on limited datasets. This paper introduces Diffusion-Classifier Synergy (DCS), a novel framework that establishes a mutual boosting loop between the diffusion model and the FSCIL classifier.
arXiv Detail & Related papers (2025-10-04T01:48:52Z) - FedDifRC: Unlocking the Potential of Text-to-Image Diffusion Models in Heterogeneous Federated Learning [12.366529890744822]
Federated learning aims at training models collaboratively across participants while protecting privacy. One major challenge for this paradigm is data heterogeneity, where biased data preferences across multiple clients harm the model's consistency and performance. In this paper, we first introduce powerful diffusion models into a novel federated paradigm with Diffusion Representation Collaboration (FedDifRC). FedDifRC combines text-driven diffusion contrasting with noise-driven diffusion regularization, aiming to provide abundant class-related semantic information and consistent convergence signals.
arXiv Detail & Related papers (2025-07-09T01:57:57Z) - One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Our method achieves strong performance on both full and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z) - ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer [95.80384464922147]
ACDiT is a blockwise Conditional Diffusion Transformer. It offers a flexible interpolation between token-wise autoregression and full-sequence diffusion. We show that ACDiT performs best among all autoregressive baselines on image and video generation tasks.
arXiv Detail & Related papers (2024-12-10T18:13:20Z) - Data-Free Federated Class Incremental Learning with Diffusion-Based Generative Memory [27.651921957220004]
We introduce a novel data-free federated class incremental learning framework with diffusion-based generative memory (DFedDGM).
We design a new balanced sampler to help train the diffusion models to alleviate the common non-IID problem in FL.
We also introduce an entropy-based sample filtering technique from an information theory perspective to enhance the quality of generative samples.
arXiv Detail & Related papers (2024-05-22T20:59:18Z) - SD-DiT: Unleashing the Power of Self-supervised Discrimination in Diffusion Transformer [102.39050180060913]
Diffusion Transformer (DiT) has emerged as the new trend of generative diffusion models on image generation.
Recent breakthroughs have been driven by mask strategy that significantly improves the training efficiency of DiT with additional intra-image contextual learning.
In this work, we address these limitations by novelly unleashing the self-supervised discrimination knowledge to boost DiT training.
arXiv Detail & Related papers (2024-03-25T17:59:35Z) - Crossway Diffusion: Improving Diffusion-based Visuomotor Policy via Self-supervised Learning [42.009856923352864]
Diffusion models have been adopted for behavioral cloning in a sequence modeling fashion.
We propose Crossway Diffusion, a simple yet effective method to enhance diffusion-based visuomotor policy learning.
Our experiments demonstrate the effectiveness of Crossway Diffusion in various simulated and real-world robot tasks.
arXiv Detail & Related papers (2023-07-04T17:59:29Z) - Denoising Diffusion Autoencoders are Unified Self-supervised Learners [58.194184241363175]
This paper shows that the networks in diffusion models, namely denoising diffusion autoencoders (DDAE), are unified self-supervised learners.
DDAE has already learned strongly linearly separable representations within its intermediate layers without auxiliary encoders.
Our diffusion-based approach achieves 95.9% and 50.0% linear evaluation accuracies on CIFAR-10 and Tiny-ImageNet.
arXiv Detail & Related papers (2023-03-17T04:20:47Z) - Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
In contrast, the introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements of state-of-the-art models on semi-supervised image classification.
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.