Metadata-guided Consistency Learning for High Content Images
- URL: http://arxiv.org/abs/2212.11595v2
- Date: Mon, 12 Jun 2023 09:21:03 GMT
- Title: Metadata-guided Consistency Learning for High Content Images
- Authors: Johan Fredin Haslum and Christos Matsoukas and Karl-Johan Leuchowius and Erik Müllers and Kevin Smith
- Abstract summary: Cross-Domain Consistency Learning (CDCL) is a self-supervised approach that is able to learn in the presence of batch effects.
CDCL enforces the learning of biological similarities while disregarding undesirable batch-specific signals.
These features are organised according to their morphological changes and are more useful for downstream tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract:   High content imaging assays can capture rich phenotypic response data for
large sets of compound treatments, aiding in the characterization and discovery
of novel drugs. However, extracting representative features from high content
images that can capture subtle nuances in phenotypes remains challenging. The
lack of high-quality labels makes it difficult to achieve satisfactory results
with supervised deep learning. Self-Supervised learning methods have shown
great success on natural images, and offer an attractive alternative also to
microscopy images. However, we find that self-supervised learning techniques
underperform on high content imaging assays. One challenge is the undesirable
domain shifts present in the data known as batch effects, which are caused by
biological noise or uncontrolled experimental conditions. To this end, we
introduce Cross-Domain Consistency Learning (CDCL), a self-supervised approach
that is able to learn in the presence of batch effects. CDCL enforces the
learning of biological similarities while disregarding undesirable
batch-specific signals, leading to more useful and versatile representations.
These features are organised according to their morphological changes and are
more useful for downstream tasks -- such as distinguishing treatments and
mechanisms of action.
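The abstract does not spell out the loss, but the core idea -- pulling embeddings of the same treatment together across experimental batches so batch-specific signal is disregarded -- can be illustrated with a minimal NumPy sketch. The function names below are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Scale each row to unit length so cosine similarity is a dot product."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def cross_batch_consistency_loss(emb_batch_a, emb_batch_b):
    """Mean (1 - cosine similarity) over paired embeddings of the SAME
    treatment imaged in two DIFFERENT experimental batches.  Minimizing
    this pulls biological replicates together regardless of which batch
    they came from, discouraging batch-specific features.
    (Hypothetical illustration of the cross-domain consistency idea.)"""
    a = l2_normalize(emb_batch_a)
    b = l2_normalize(emb_batch_b)
    cosine = np.sum(a * b, axis=1)
    return float(np.mean(1.0 - cosine))

# Toy check: identical embeddings across batches incur (near-)zero loss.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
print(round(cross_batch_consistency_loss(z, z), 6))  # 0.0
```

In a real pipeline this term would be combined with a standard self-supervised objective, with the pairing driven by treatment metadata rather than labels.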
 
      
Related papers
- Diverse Image Generation with Diffusion Models and Cross Class Label Learning for Polyp Classification [4.747649393635696]
 We develop a novel model, PathoPolyp-Diff, that generates text-controlled synthetic images with diverse characteristics.
We introduce cross-class label learning to make the model learn features from other classes, reducing the burdensome task of data annotation.
 arXiv  Detail & Related papers  (2025-02-08T04:26:20Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
 Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
 arXiv  Detail & Related papers  (2024-11-28T13:04:45Z)
- Unleashing the Potential of Synthetic Images: A Study on Histopathology Image Classification [0.12499537119440242]
 Histopathology image classification is crucial for the accurate identification and diagnosis of various diseases.
We show that synthetic images can effectively augment existing datasets, ultimately improving the performance of the downstream histopathology image classification task.
 arXiv  Detail & Related papers  (2024-09-24T12:02:55Z)
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
 A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
 arXiv  Detail & Related papers  (2024-03-28T17:23:45Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
 Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, there has been inadequate exploration dedicated to unsupervised learning on diffusion-generated images.
We introduce customized solutions by fully exploiting the aforementioned free attention masks.
 arXiv  Detail & Related papers  (2023-08-13T10:07:46Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
 We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
 arXiv  Detail & Related papers  (2023-01-11T11:38:37Z)
- Comparison of semi-supervised learning methods for High Content Screening quality control [0.34998703934432673]
 High-content screening (HCS) enables the quantification of complex cellular phenotypes from images at high throughput.
This process can be obstructed by image aberrations such as out-of-focus blur, fluorophore saturation, debris, high noise levels, unexpected auto-fluorescence, or empty images.
We evaluate deep learning options that do not require extensive image annotations, providing a straightforward, easy-to-use semi-supervised learning solution.
 arXiv  Detail & Related papers  (2022-08-09T08:14:36Z)
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology [5.164102666113966]
 We conduct a search for good representations in pathology by training a variety of self-supervised models with validation on a variety of weakly-supervised and patch-level tasks.
Our key finding is in discovering that Vision Transformers using DINO-based knowledge distillation are able to learn data-efficient and interpretable features in histology images.
 arXiv  Detail & Related papers  (2022-03-01T16:14:41Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
 This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
 arXiv  Detail & Related papers  (2022-02-27T02:19:09Z)
- Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
 We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
 arXiv  Detail & Related papers  (2021-06-16T22:15:28Z)
- DSAL: Deeply Supervised Active Learning from Strong and Weak Labelers for Biomedical Image Segmentation [13.707848142719424]
 We propose a deep active semi-supervised learning framework, DSAL, combining active learning and semi-supervised learning strategies.
In DSAL, a new criterion based on deep supervision mechanism is proposed to select informative samples with high uncertainties.
We use the proposed criteria to select samples for strong and weak labelers to produce oracle labels and pseudo labels simultaneously at each active learning iteration.
 arXiv  Detail & Related papers  (2021-01-22T11:31:33Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
 In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
 arXiv  Detail & Related papers  (2020-10-20T06:06:06Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
 We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to support the cross-attention process and overcome the imbalance between classes and easy-dominated samples within each class.
 arXiv  Detail & Related papers  (2020-07-21T14:37:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.