Knowledge-Guided Multiview Deep Curriculum Learning for Elbow Fracture
Classification
- URL: http://arxiv.org/abs/2110.10383v1
- Date: Wed, 20 Oct 2021 05:42:20 GMT
- Title: Knowledge-Guided Multiview Deep Curriculum Learning for Elbow Fracture
Classification
- Authors: Jun Luo, Gene Kitamura, Dooman Arefan, Emine Doganay, Ashok Panigrahy,
Shandong Wu
- Abstract summary: Elbow fracture diagnosis often requires patients to take both frontal and lateral views of elbow X-ray radiographs.
We propose a multiview deep learning method for an elbow fracture subtype classification task.
- Score: 4.305082635886227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Elbow fracture diagnosis often requires patients to take both frontal and
lateral views of elbow X-ray radiographs. In this paper, we propose a multiview
deep learning method for an elbow fracture subtype classification task. Our
strategy leverages transfer learning by first training two single-view models,
one for frontal view and the other for lateral view, and then transferring the
weights to the corresponding layers in the proposed multiview network
architecture. Meanwhile, quantitative medical knowledge was integrated into the
training process through a curriculum learning framework, which enables the
model to first learn from "easier" samples and then transition to "harder"
samples to reach better performance. In addition, our multiview network can
work both in a dual-view setting and with a single view as input. We evaluate
our method through extensive experiments on a classification task of elbow
fracture with a dataset of 1,964 images. Results show that our method
outperforms two related methods for bone fracture classification in multiple settings,
and our technique is able to boost the performance of the compared methods. The
code is available at https://github.com/ljaiverson/multiview-curriculum.
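The dual-branch design implied by the abstract can be sketched as follows. This is a minimal illustration under stated assumptions (ResNet-50 backbones, late fusion by feature concatenation, per-view heads so a single view can still be classified); the authors' exact architecture and layer names live in the linked repository.
```python
# Hypothetical sketch of the multiview classifier described in the abstract:
# two single-view encoders (frontal, lateral) initialized from separately
# trained single-view models, plus a fused classification head.
import torch
import torch.nn as nn
from torchvision.models import resnet50


def make_encoder():
    """Backbone without the final fully connected layer (assumed ResNet-50)."""
    backbone = resnet50(weights=None)
    dim = backbone.fc.in_features          # 2048 for ResNet-50
    backbone.fc = nn.Identity()            # keep only the feature extractor
    return backbone, dim


class MultiViewNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.frontal, dim = make_encoder()
        self.lateral, _ = make_encoder()
        # Per-view heads let the network also run with a single view as input.
        self.frontal_head = nn.Linear(dim, num_classes)
        self.lateral_head = nn.Linear(dim, num_classes)
        self.fused_head = nn.Linear(2 * dim, num_classes)

    def forward(self, frontal=None, lateral=None):
        if frontal is not None and lateral is not None:
            f = self.frontal(frontal)
            l = self.lateral(lateral)
            return self.fused_head(torch.cat([f, l], dim=1))
        if frontal is not None:                               # frontal-only input
            return self.frontal_head(self.frontal(frontal))
        return self.lateral_head(self.lateral(lateral))       # lateral-only input


def transfer_single_view_weights(model, frontal_ckpt, lateral_ckpt):
    """Copy weights from two pre-trained single-view state dicts into the
    corresponding branches of the multiview network (transfer-learning step)."""
    model.frontal.load_state_dict(torch.load(frontal_ckpt), strict=False)
    model.lateral.load_state_dict(torch.load(lateral_ckpt), strict=False)
    return model
```
In this reading of the abstract, the two single-view classifiers are trained first, their saved weights are copied into the corresponding branches, and only then is the fused model fine-tuned.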
Related papers
- Joint chest X-ray diagnosis and clinical visual attention prediction with multi-stage cooperative learning: enhancing interpretability [2.64700310378485]
We introduce a novel deep-learning framework for joint disease diagnosis and prediction of corresponding visual saliency maps for chest X-ray scans.
Specifically, we designed a novel dual-encoder multi-task UNet, which leverages both a DenseNet201 backbone and a Residual and Squeeze-and-Excitation block-based encoder.
Experiments show that our proposed method outperforms existing techniques in both chest X-ray diagnosis and the quality of the predicted visual saliency maps.
arXiv Detail & Related papers (2024-03-25T17:31:12Z) - Towards Robust Natural-Looking Mammography Lesion Synthesis on
Ipsilateral Dual-Views Breast Cancer Analysis [1.1098503592431275]
Two major issues in mammogram classification are leveraging multi-view mammographic information and handling class imbalance.
We propose a simple but novel method for enhancing the examined view (main view) by leveraging low-level feature information from the auxiliary view.
We also propose a simple but novel malignant mammogram synthesis framework for synthesizing minority-class samples.
arXiv Detail & Related papers (2023-09-07T06:33:30Z) - Learning Discriminative Representation via Metric Learning for
Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework specially to help the feature extractor learn to extract more discriminative feature representations.
Experiments mainly on three medical image datasets show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z) - Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathology images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z) - Medical Knowledge-Guided Deep Curriculum Learning for Elbow Fracture
Diagnosis from X-Ray Images [4.5617336730758735]
We propose a novel deep learning method to diagnose elbow fracture from elbow X-ray images.
In our method, the training data are permuted by sampling without replacement at the beginning of each training epoch.
The sampling probability of each training sample is guided by a scoring criterion constructed from clinically known knowledge provided by human experts (see the curriculum-sampling sketch after this list).
arXiv Detail & Related papers (2021-10-20T05:24:35Z) - Partner-Assisted Learning for Few-Shot Image Classification [54.66864961784989]
Few-shot learning has been studied to mimic human visual capabilities and learn effective models without the need for exhaustive human annotation.
In this paper, we focus on the design of training strategy to obtain an elemental representation such that the prototype of each novel class can be estimated from a few labeled samples.
We propose a two-stage training scheme, which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with soft-anchors while attempting to maximize classification performance.
arXiv Detail & Related papers (2021-09-15T22:46:19Z) - Distribution Alignment: A Unified Framework for Long-tail Visual
Recognition [52.36728157779307]
We propose a unified distribution alignment strategy for long-tail visual recognition.
We then introduce a generalized re-weighting method in the two-stage learning scheme to balance the class prior.
Our approach achieves the state-of-the-art results across all four recognition tasks with a simple and unified framework.
arXiv Detail & Related papers (2021-03-30T14:09:53Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models by learning knowledge from multiple source tasks and data from different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - MultiMix: Sparingly Supervised, Extreme Multitask Learning From Medical
Images [13.690075845927606]
We propose a novel multitask learning model, namely MultiMix, which jointly learns disease classification and anatomical segmentation in a sparingly supervised manner.
Our experiments justify the effectiveness of our multitasking model for the classification of pneumonia and segmentation of lungs from chest X-ray images.
arXiv Detail & Related papers (2020-10-28T03:47:29Z) - Medical-based Deep Curriculum Learning for Improved Fracture
Classification [36.54112505898611]
We propose and compare several strategies relying on curriculum learning to support the classification of proximal femur fractures from X-ray images.
Our strategies are derived from knowledge such as medical decision trees and inconsistencies in the annotations of multiple experts.
Our results show that, compared to class-uniform and random strategies, the proposed medical knowledge-based curriculum performs up to 15% better in terms of accuracy.
arXiv Detail & Related papers (2020-04-01T14:56:43Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
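The knowledge-guided curriculum summarized in the "Medical Knowledge-Guided Deep Curriculum Learning" entry above (and reused in the main paper) can be sketched as follows. This is a rough illustration, not the authors' implementation: the easiness scores, the hypothetical train_one_step callback, and the annealing schedule are all assumptions.
```python
# Hypothetical sketch of knowledge-guided curriculum sampling: at the start of
# every epoch the training set is re-permuted by sampling WITHOUT replacement,
# where each sample's chance of being drawn early is proportional to a
# clinically derived "easiness" score (assumed positive).
import numpy as np


def curriculum_permutation(scores: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return an index permutation in which higher-scored ("easier") samples
    tend to appear earlier, because they are more likely to be drawn first."""
    probs = scores / scores.sum()
    return rng.choice(len(scores), size=len(scores), replace=False, p=probs)


def train_with_curriculum(samples, scores, num_epochs, train_one_step):
    """train_one_step is a hypothetical per-sample (or per-batch) update."""
    rng = np.random.default_rng(0)
    scores = np.asarray(scores, dtype=float)
    for epoch in range(num_epochs):
        order = curriculum_permutation(scores, rng)
        for idx in order:                      # easier samples tend to come first
            train_one_step(samples[idx])
        # Optionally anneal the scores toward uniform so later epochs approach
        # ordinary random shuffling (an assumption about the schedule).
        scores = scores + (scores.mean() - scores) * (epoch + 1) / num_epochs
```
Because sampling is without replacement with probabilities proportional to the scores, higher-scored samples are statistically drawn earlier in each epoch, giving the easier-to-harder ordering the curriculum relies on.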