Sample hardness based gradient loss for long-tailed cervical cell
detection
- URL: http://arxiv.org/abs/2208.03779v1
- Date: Sun, 7 Aug 2022 17:52:29 GMT
- Title: Sample hardness based gradient loss for long-tailed cervical cell
detection
- Authors: Minmin Liu, Xuechen Li, Xiangbo Gao, Junliang Chen, Linlin Shen, Huisi
Wu
- Abstract summary: We propose a Grad-Libra Loss to dynamically calibrate the degree of hardness of each sample for different categories.
Our loss can thus help the detector to put more emphasis on those hard samples in both head and tail categories.
- Score: 40.503143547742866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the difficulty of cancer samples collection and annotation, cervical
cancer datasets usually exhibit a long-tailed data distribution. When training
a detector to detect cancer cells in a WSI (Whole Slide Image) captured from a
TCT (ThinPrep Cytology Test) specimen, head categories (e.g.
normal cells and inflammatory cells) typically have a much larger number of
samples than tail categories (e.g. cancer cells). Most existing
state-of-the-art long-tailed learning methods for object detection rely on
category distribution statistics and ignore the "hardness" of each
sample. To address this
problem, in this work we propose a Grad-Libra Loss that leverages the gradients
to dynamically calibrate the degree of hardness of each sample for different
categories, and re-balance the gradients of positive and negative samples. Our
loss can thus help the detector to put more emphasis on those hard samples in
both head and tail categories. Extensive experiments on a long-tailed TCT WSI
dataset show that mainstream detectors (e.g. RepPoints, FCOS, ATSS, and YOLOF)
trained with our proposed Grad-Libra Loss achieve much higher mAP (by 7.8%)
than when trained with the cross-entropy classification loss.
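The abstract does not give the loss formula, but the mechanism it describes (measuring each sample's hardness from its gradient, then re-balancing the gradients of positive and negative samples) can be sketched roughly as below. The function name `gradient_rebalanced_bce`, the `alpha` split, and the use of the BCE gradient magnitude |p - y| as the hardness measure are illustrative assumptions, not the paper's actual Grad-Libra formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_rebalanced_bce(logits, targets, alpha=0.5, eps=1e-12):
    """Sketch of a gradient-rebalancing classification loss.

    Hardness is taken as |dL/dz| of BCE, i.e. |sigmoid(z) - y|; positive
    and negative samples are then re-weighted so each group contributes
    a fixed share (alpha vs. 1 - alpha) of the total gradient mass.
    """
    p = sigmoid(logits)
    hardness = np.abs(p - targets)      # hard samples have large gradients
    pos = targets > 0.5
    w = np.empty_like(hardness)
    w[pos] = alpha / (hardness[pos].sum() + eps)
    w[~pos] = (1.0 - alpha) / (hardness[~pos].sum() + eps)
    bce = -(targets * np.log(p + eps) + (1.0 - targets) * np.log(1.0 - p + eps))
    return float((w * bce).sum())
```

With `alpha=0.5` the re-weighted positive and negative gradient masses are equal regardless of how imbalanced the raw counts are, which is the rough intuition behind re-balancing head and tail contributions.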
Related papers
- LoGex: Improved tail detection of extremely rare histopathology classes via guided diffusion [36.56346240815833]
In realistic medical settings, the data are often long-tailed, with most samples concentrated in a few classes and a long tail of rare classes, usually containing just a few samples.
This distribution presents a significant challenge because rare conditions are critical to detect and difficult to classify due to limited data.
In this paper, rather than attempting to classify rare classes, we aim to detect these as out-of-distribution data reliably.
arXiv Detail & Related papers (2024-09-02T15:18:15Z)
- Imbalanced Aircraft Data Anomaly Detection [103.01418862972564]
Anomaly detection in temporal data from sensors under aviation scenarios is a practical but challenging task.
We propose a Graphical Temporal Data Analysis framework.
It consists of three modules: Series-to-Image (S2I), Cluster-based Resampling approach using Euclidean Distance (CRD), and Variance-Based Loss (VBL).
arXiv Detail & Related papers (2023-05-17T09:37:07Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- On the Optimal Combination of Cross-Entropy and Soft Dice Losses for Lesion Segmentation with Out-of-Distribution Robustness [15.08731999725517]
We study the impact of different loss functions on lesion segmentation from medical images.
We analyze the impact of the minimization of different loss functions on in-distribution performance.
Our findings are surprising: CE-Dice loss combinations that excel in segmenting in-distribution images have a poor performance when dealing with Out-of-Distribution data.
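A convex combination of cross-entropy and soft Dice of the kind studied in that paper can be sketched as follows; the weight `lam`, the smoothing `eps`, and the function name are hypothetical, and this is a minimal single-image illustration rather than the paper's exact setup.

```python
import numpy as np

def ce_dice_loss(probs, targets, lam=0.5, eps=1e-6):
    """Weighted sum of pixel-wise binary cross-entropy and (1 - soft Dice):
    L = lam * CE + (1 - lam) * (1 - Dice). probs/targets are flat arrays
    of per-pixel foreground probabilities and binary labels."""
    ce = -np.mean(targets * np.log(probs + eps)
                  + (1.0 - targets) * np.log(1.0 - probs + eps))
    inter = (probs * targets).sum()
    dice = (2.0 * inter + eps) / (probs.sum() + targets.sum() + eps)
    return float(lam * ce + (1.0 - lam) * (1.0 - dice))
```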
arXiv Detail & Related papers (2022-09-13T15:32:32Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Out of distribution detection for skin and malaria images [5.37275632397777]
We propose an approach to robustly classify OoD samples in skin and malaria images without the need to access labeled OoD samples during training.
We use metric learning along with logistic regression to force the deep networks to learn richer, class-representative features.
We achieve state-of-the-art results, improving TNR@TPR95 by 5% and 4% over the previous state of the art for skin cancer and malaria OoD detection, respectively.
arXiv Detail & Related papers (2021-11-02T11:16:07Z)
- Hardness of Samples Is All You Need: Protecting Deep Learning Models Using Hardness of Samples [1.2074552857379273]
We show that the hardness degree of model extraction attacks samples is distinguishable from the hardness degree of normal samples.
We propose Hardness-Oriented Detection Approach (HODA) to detect the sample sequences of model extraction attacks.
arXiv Detail & Related papers (2021-06-21T22:03:31Z)
- Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z)
- Seesaw Loss for Long-Tailed Instance Segmentation [131.86306953253816]
We propose Seesaw Loss to dynamically re-balance gradients of positive and negative samples for each category.
The mitigation factor reduces penalties on tail categories according to the ratio of cumulative training instances between categories.
The compensation factor increases the penalty of misclassified instances to avoid false positives of tail categories.
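The mitigation and compensation factors described above can be sketched as per-class scaling factors applied to the negative-class terms of a softmax cross-entropy. The exponent names `p` and `q`, the thresholds, and the function name below are assumptions based on this summary, not a verified re-implementation of Seesaw Loss.

```python
import numpy as np

def seesaw_scaling(num_per_class, probs, true_class, p=0.8, q=2.0):
    """For a sample of class i, return a per-class factor S_j scaling the
    negative-class gradient onto each class j."""
    n = np.asarray(num_per_class, dtype=float)
    i = true_class
    # mitigation: for a rarer class j (N_j < N_i), shrink its negative
    # gradient by (N_j / N_i)^p; head classes (N_j >= N_i) are unscaled
    mitigation = np.minimum(1.0, (n / n[i]) ** p)
    # compensation: if class j out-scores the true class, re-amplify
    # its penalty by (p_j / p_i)^q to suppress false positives
    ratio = probs / max(probs[i], 1e-12)
    compensation = np.where(probs > probs[i], ratio ** q, 1.0)
    s = mitigation * compensation
    s[i] = 1.0  # the positive (true-class) term is left untouched
    return s
```

The interplay is the "seesaw": rare classes are protected from being punished by abundant head samples, unless the model is actually confusing a sample with them, in which case the compensation factor restores the penalty.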
arXiv Detail & Related papers (2020-08-23T12:44:45Z)
- Decoupled Gradient Harmonized Detector for Partial Annotation: Application to Signet Ring Cell Detection [13.530905176008057]
We propose Decoupled Gradient Harmonizing Mechanism (DGHM) and embed it into classification loss, denoted as DGHM-C loss.
Without bells and whistles, we achieved 2nd place in the challenge.
arXiv Detail & Related papers (2020-04-09T09:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.