Exploring Non-contrastive Self-supervised Representation Learning for Image-based Profiling
- URL: http://arxiv.org/abs/2506.14265v1
- Date: Tue, 17 Jun 2025 07:25:57 GMT
- Title: Exploring Non-contrastive Self-supervised Representation Learning for Image-based Profiling
- Authors: Siran Dai, Qianqian Xu, Peisong Wen, Yang Liu, Qingming Huang
- Abstract summary: SSLProfiler is a non-contrastive SSL framework specifically designed for cell profiling. We introduce specialized data augmentation and representation post-processing methods tailored to cell images. With these improvements, SSLProfiler won the Cell Line Transferability challenge at CVPR 2025.
- Score: 80.09819072780193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image-based cell profiling aims to create informative representations of cell images. This technique is critical in drug discovery and has greatly advanced with recent improvements in computer vision. Inspired by recent developments in non-contrastive Self-Supervised Learning (SSL), this paper provides an initial exploration into training a generalizable feature extractor for cell images using such methods. However, there are two major challenges: 1) There is a large difference between the distributions of cell images and natural images, causing the view-generation process in existing SSL methods to fail; and 2) Unlike typical scenarios where each representation is based on a single image, cell profiling often involves multiple input images, making it difficult to effectively combine all available information. To overcome these challenges, we propose SSLProfiler, a non-contrastive SSL framework specifically designed for cell profiling. We introduce specialized data augmentation and representation post-processing methods tailored to cell images, which effectively address the issues mentioned above and result in a robust feature extractor. With these improvements, SSLProfiler won the Cell Line Transferability challenge at CVPR 2025.
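To make the setup described in the abstract concrete, the sketch below shows a generic non-contrastive (SimSiam-style) training step and a simple multi-image aggregation routine in PyTorch. It is only an illustration of the general technique, not the released SSLProfiler code: the ResNet-50 backbone, MLP sizes, the `training_step` and `aggregate_profile` helpers, and the mean-pooling aggregation are assumptions, and the cell-specific augmentations the paper emphasizes are assumed to be applied upstream.

```python
# Minimal, hypothetical sketch of a non-contrastive (SimSiam-style) SSL step
# for cell images. NOT the authors' SSLProfiler implementation; backbone,
# network sizes, and the mean-pooling aggregation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class NonContrastiveProfiler(nn.Module):
    def __init__(self, feat_dim=2048, proj_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()            # keep the 2048-d pooled feature
        self.encoder = backbone
        self.projector = nn.Sequential(        # projection MLP
            nn.Linear(feat_dim, proj_dim), nn.BatchNorm1d(proj_dim),
            nn.ReLU(inplace=True), nn.Linear(proj_dim, proj_dim))
        self.predictor = nn.Sequential(        # prediction MLP (asymmetric branch)
            nn.Linear(proj_dim, proj_dim), nn.BatchNorm1d(proj_dim),
            nn.ReLU(inplace=True), nn.Linear(proj_dim, proj_dim))

    def forward(self, x):
        z = self.projector(self.encoder(x))
        p = self.predictor(z)
        return z, p

def neg_cosine(p, z):
    # Stop-gradient on the target branch prevents representational collapse.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def training_step(model, view1, view2, optimizer):
    """view1/view2: two augmented views of the same batch of cell images.
    Cell-appropriate augmentations (e.g. per-channel intensity jitter rather
    than natural-image color jitter) are assumed to be applied upstream."""
    z1, p1 = model(view1)
    z2, p2 = model(view2)
    loss = 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def aggregate_profile(model, images):
    """One plausible way to combine multiple images of the same sample into a
    single profile: mean-pool the L2-normalised encoder features."""
    feats = F.normalize(model.encoder(images), dim=-1)    # (n_images, feat_dim)
    return feats.mean(dim=0)                              # (feat_dim,)
```

The stop-gradient on the target branch is what lets a non-contrastive objective avoid collapse without negative pairs; per the abstract, the paper's contribution lies in replacing the natural-image view generation and the representation post-processing that such a recipe normally relies on.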
Related papers
- MaskedCLIP: Bridging the Masked and CLIP Space for Semi-Supervised Medical Vision-Language Pre-training [27.35164449801058]
State-of-the-art methods leverage either paired image-text data via vision-language pre-training or unpaired image data via self-supervised pre-training to learn foundation models. We propose MaskedCLIP, a synergistic masked image modeling and contrastive language-image pre-training framework.
arXiv Detail & Related papers (2025-07-23T06:15:54Z)
- MIRAM: Masked Image Reconstruction Across Multiple Scales for Breast Lesion Risk Prediction [2.0199924721373392]
Masked image modeling (MIM) has emerged as a more potent SSL technique. This paper introduces a scalable and practical SSL approach centered on more challenging pretext tasks. We hypothesize that reconstructing high-resolution images enables the model to attend to finer spatial details.
arXiv Detail & Related papers (2025-03-10T10:32:55Z)
- Discriminative Image Generation with Diffusion Models for Zero-Shot Learning [53.44301001173801]
We present DIG-ZSL, a novel Discriminative Image Generation framework for Zero-Shot Learning. We learn a discriminative class token (DCT) for each unseen class under the guidance of a pre-trained category discrimination model (CDM). Extensive experiments and visualizations on four datasets show that DIG-ZSL (1) generates diverse and high-quality images, (2) outperforms previous state-of-the-art methods based on non-human-annotated semantic prototypes by a large margin, and (3) achieves comparable or better performance than baselines that leverage human-annotated semantic prototypes.
arXiv Detail & Related papers (2024-12-23T02:18:54Z)
- Gen-SIS: Generative Self-augmentation Improves Self-supervised Learning [52.170253590364545]
Gen-SIS is a diffusion-based augmentation technique trained exclusively on unlabeled image data. We show that these 'self-augmentations', i.e., generative augmentations based on the vanilla SSL encoder embeddings, facilitate the training of a stronger SSL encoder.
arXiv Detail & Related papers (2024-12-02T16:20:59Z)
- CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition [73.51329037954866]
We propose a robust global representation method with cross-image correlation awareness for visual place recognition.
Our method uses the attention mechanism to correlate multiple images within a batch.
Our method outperforms state-of-the-art methods by a large margin with significantly less training time.
arXiv Detail & Related papers (2024-02-29T15:05:11Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- ProS: Facial Omni-Representation Learning via Prototype-based Self-Distillation [22.30414271893046]
Prototype-based Self-Distillation (ProS) is a novel approach for unsupervised face representation learning.
ProS consists of two vision-transformers (teacher and student models) that are trained with different augmented images.
ProS achieves state-of-the-art performance on various tasks, both in full and few-shot settings.
arXiv Detail & Related papers (2023-11-03T14:10:06Z)
- GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation [5.049466204159458]
Self-supervised learning (SSL) is an alternative paradigm that provides some respite by constructing models using only unannotated data.
In this paper, we propose an SSL approach for segmenting histopathological images via generative diffusion models.
Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task.
arXiv Detail & Related papers (2023-09-04T09:49:24Z)
- Zero-Shot Learning by Harnessing Adversarial Samples [52.09717785644816]
We propose a novel Zero-Shot Learning (ZSL) approach by Harnessing Adversarial Samples (HAS).
HAS advances ZSL through adversarial training which takes into account three crucial aspects.
We demonstrate the effectiveness of our adversarial samples approach in both ZSL and Generalized Zero-Shot Learning (GZSL) scenarios.
arXiv Detail & Related papers (2023-08-01T06:19:13Z)
- SSiT: Saliency-guided Self-supervised Image Transformer for Diabetic Retinopathy Grading [2.0790896742002274]
Saliency-guided Self-Supervised image Transformer (SSiT) is proposed for Diabetic Retinopathy grading from fundus images.
We introduce saliency maps into SSL to guide self-supervised pre-training with domain-specific prior knowledge.
arXiv Detail & Related papers (2022-10-20T02:35:26Z)
- Unlabeled Data Guided Semi-supervised Histopathology Image Segmentation [34.45302976822067]
Semi-supervised learning (SSL) based on generative methods has been proven to be effective in utilizing diverse image characteristics.
We propose a new data-guided generative method for histopathology image segmentation that leverages the unlabeled data distribution.
Our method is evaluated on glands and nuclei datasets.
arXiv Detail & Related papers (2020-12-17T02:54:19Z)