Robustly Optimized Deep Feature Decoupling Network for Fatty Liver Diseases Detection
- URL: http://arxiv.org/abs/2406.17338v1
- Date: Tue, 25 Jun 2024 07:50:09 GMT
- Title: Robustly Optimized Deep Feature Decoupling Network for Fatty Liver Diseases Detection
- Authors: Peng Huang, Shu Hu, Bo Peng, Jiashu Zhang, Xi Wu, Xin Wang
- Abstract summary: Current medical image classification efforts mainly aim for higher average performance.
Without the support of massive data, deep learning faces challenges in fine-grained classification of fatty liver.
We propose an innovative deep learning framework that combines feature decoupling and adaptive adversarial training.
- Score: 18.24448979368885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current medical image classification efforts mainly aim for higher average performance, often neglecting the balance between different classes. This can lead to significant differences in recognition accuracy between classes and obvious recognition weaknesses. Without the support of massive data, deep learning faces challenges in the fine-grained classification of fatty liver. In this paper, we propose an innovative deep learning framework that combines feature decoupling and adaptive adversarial training. First, we employ two iteratively compressed decouplers to decouple, under supervision, the common features and the fatty-liver-specific features in abdominal ultrasound images. Subsequently, after a color-space transformation, the decoupled features are concatenated with the original image and fed into the classifier. During adversarial training, we adaptively adjust the perturbation and balance the adversarial strength according to the accuracy of each class. The model eliminates recognition weaknesses by correctly classifying adversarial samples, thus improving recognition robustness. Finally, the accuracy of our method improved by 4.16%, reaching 82.95%. As demonstrated by extensive experiments, our method is a generalized learning framework that can be directly used to eliminate the recognition weaknesses of any classifier while improving its average performance. Code is available at https://github.com/HP-ML/MICCAI2024.
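The abstract's adaptive scheme ties adversarial strength to per-class accuracy. A minimal sketch of that idea, assuming a simple inverse-accuracy schedule (the function name, `eps_max`, and the linear rule are illustrative, not the paper's exact formulation):

```python
import numpy as np

def class_adaptive_epsilons(per_class_acc, eps_max=0.03):
    """Weaker classes (lower accuracy) get a larger perturbation budget,
    so adversarial training concentrates on recognition weaknesses."""
    acc = np.asarray(per_class_acc, dtype=float)
    return eps_max * (1.0 - acc)

# A well-recognized class gets a small budget, a weak class a large one.
eps = class_adaptive_epsilons([0.95, 0.60], eps_max=0.03)
```

Each per-class epsilon would then bound the perturbation used when crafting adversarial samples for that class.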
Related papers
- Addressing Imbalance for Class Incremental Learning in Medical Image Classification [14.242875524728495]
We introduce two plug-in methods to mitigate the adverse effects of imbalance.
First, we propose a CIL-balanced classification loss to mitigate the classification bias toward majority classes.
Second, we propose a distribution margin loss that not only alleviates the inter-class overlap in embedding space but also enforces the intra-class compactness.
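The CIL-balanced loss itself is not reproduced in the abstract; a common way to realize the same goal of countering bias toward majority classes is "effective number of samples" reweighting, sketched here (function name and `beta` default are illustrative assumptions):

```python
import numpy as np

def balanced_class_weights(class_counts, beta=0.999):
    """Effective-number reweighting: rare classes get larger loss
    weights; weights are normalized to sum to the number of classes."""
    counts = np.asarray(class_counts, dtype=float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(counts) / w.sum()
```

These weights would multiply the per-class terms of a standard cross-entropy loss.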
arXiv Detail & Related papers (2024-07-18T17:59:44Z) - Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method [4.303291247305105]
We improve the supervised contrastive learning method by leveraging both image-level labels and domain-specific augmentations to enhance model robustness.
We evaluate our method on the BreakHis dataset, which consists of breast cancer histopathology images.
This improvement corresponds to an absolute accuracy of 93.63%, highlighting the effectiveness of our approach in leveraging the properties of the data to learn a more appropriate representation space.
arXiv Detail & Related papers (2024-05-06T17:06:11Z) - Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In the scenario of long-tailed recognition, where the number of samples per class is imbalanced, treating the two types of positive samples equally leads to biased optimization of the intra-category distance.
We propose patch-based self-distillation to transfer knowledge from head to tail classes and relieve the under-representation of the tail classes.
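The "two types of positive samples" in SCL are augmented views of the anchor and other same-class samples in the batch. A minimal numpy sketch of the standard supervised contrastive loss that this paper decouples (not the paper's decoupled variant):

```python
import numpy as np

def supcon_loss(feats, labels, tau=0.1):
    """Minimal supervised contrastive loss: every same-class sample in
    the batch is a positive; self-similarity is excluded.
    feats: (N, d) L2-normalized embeddings, labels: (N,)."""
    feats = np.asarray(feats, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    sim = feats @ feats.T / tau
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)       # exclude self-contrast
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    n_pos = pos.sum(axis=1)
    keep = n_pos > 0                               # anchors with >= 1 positive
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)[keep] / n_pos[keep]
    return per_anchor.mean()
```

Under class imbalance, head-class anchors dominate the positive pairs, which is the bias the decoupled variant addresses.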
arXiv Detail & Related papers (2024-03-10T09:46:28Z) - Understanding the Detrimental Class-level Effects of Data Augmentation [63.1733767714073]
achieving optimal average accuracy comes at the cost of significantly hurting individual class accuracy by as much as 20% on ImageNet.
We present a framework for understanding how DA interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
arXiv Detail & Related papers (2023-12-07T18:37:43Z) - DOMINO: Domain-aware Model Calibration in Medical Image Segmentation [51.346121016559024]
Modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability.
We propose DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels.
Our results show that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation.
arXiv Detail & Related papers (2022-09-13T15:31:52Z) - Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images based on cross-domain mix-up and build the pretext task based on image reconstruction and transparency prediction.
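The cross-domain mix-up described above can be sketched as follows; the mixing coefficient doubles as the target for the transparency-prediction pretext task (function name and Beta prior are illustrative assumptions):

```python
import numpy as np

def cross_domain_mixup(x_a, x_b, alpha=1.0, rng=None):
    """Blend a sample from one domain with a sample from another.
    Returns the mixed sample and lam, the transparency target."""
    rng = np.random.default_rng(rng)
    lam = float(rng.beta(alpha, alpha))
    return lam * x_a + (1.0 - lam) * x_b, lam
```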
arXiv Detail & Related papers (2022-04-02T16:58:36Z) - Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT, and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
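The parameter-free idea behind a prototypical classifier can be sketched in a few lines: a class prototype is the mean embedding of that class, and prediction is nearest-prototype assignment (function names are illustrative; the paper may use a different distance or normalization):

```python
import numpy as np

def class_prototypes(feats, labels):
    """Prototype = mean embedding per class; no extra parameters fitted."""
    classes = np.unique(labels)
    protos = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def nearest_prototype_predict(feats, classes, protos):
    """Assign each sample to its closest prototype (squared Euclidean),
    keeping predictions comparable even under class imbalance."""
    d2 = ((feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

Because the prototype is a per-class mean, a tail class with few samples still gets a full-fledged decision center rather than an under-trained weight vector.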
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Multiclass Burn Wound Image Classification Using Deep Convolutional Neural Networks [0.0]
Continuous wound monitoring is important for wound specialists to allow more accurate diagnosis and optimization of management protocols.
In this study, we use a deep learning-based method to classify burn wound images into two or three different categories based on the wound conditions.
arXiv Detail & Related papers (2021-03-01T23:54:18Z) - Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z) - Contraction Mapping of Feature Norms for Classifier Learning on the Data with Different Quality [5.47982638565422]
We propose a contraction mapping function to compress the range of feature norms of training images according to their quality.
Experiments on various classification applications, including handwritten digit recognition, lung nodule classification, face verification and face recognition, demonstrate that the proposed approach is promising.
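One simple way to contract a range of feature norms, sketched under the assumption of an affine rescaling into a target interval (the paper's actual quality-dependent mapping is not reproduced here):

```python
import numpy as np

def contract_feature_norms(feats, lo=8.0, hi=16.0):
    """Rescale each feature's L2 norm into [lo, hi] while keeping its
    direction unchanged, compressing the spread of norms in a batch."""
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    spread = norms.max() - norms.min()
    scaled = (norms - norms.min()) / (spread + 1e-12)
    return feats / norms * (lo + (hi - lo) * scaled)
```

Keeping directions fixed means the mapping only changes how confidently each sample is represented, not what it represents.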
arXiv Detail & Related papers (2020-07-27T09:53:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.