Meta-repository of screening mammography classifiers
- URL: http://arxiv.org/abs/2108.04800v1
- Date: Tue, 10 Aug 2021 17:39:26 GMT
- Title: Meta-repository of screening mammography classifiers
- Authors: Benjamin Stadnick, Jan Witowski, Vishwaesh Rajiv, Jakub Chłędowski, Farah E. Shamout, Kyunghyun Cho and Krzysztof J. Geras
- Abstract summary: We release a meta-repository containing deep learning models for classification of screening mammograms.
At its inception, our meta-repository contains five state-of-the-art models with open-source implementations.
We compare their performance on five international data sets.
- Score: 35.24447276237306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) is transforming medicine and showing promise in
improving clinical diagnosis. In breast cancer screening, several recent
studies show that AI has the potential to improve radiologists' accuracy,
subsequently helping in early cancer diagnosis and reducing unnecessary workup.
As the number of proposed models and their complexity grows, it is becoming
increasingly difficult to re-implement them in order to reproduce the results
and to compare different approaches. To enable reproducibility of research in
this application area and comparison between different methods, we
release a meta-repository containing deep learning models for classification of
screening mammograms. This meta-repository creates a framework that enables the
evaluation of machine learning models on any private or public screening
mammography data set. At its inception, our meta-repository contains five
state-of-the-art models with open-source implementations and cross-platform
compatibility. We compare their performance on five international data sets:
two private New York University breast cancer screening data sets as well as
three public (DDSM, INbreast and Chinese Mammography Database) data sets. Our
framework has a flexible design that can be generalized to other medical image
analysis tasks. The meta-repository is available at
https://www.github.com/nyukat/mammography_metarepository.
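As a concrete illustration of the kind of evaluation such a framework performs, below is a minimal, hypothetical sketch of scoring breast-level malignancy predictions with AUROC. The file names, CSV columns and helper function are illustrative assumptions, not the meta-repository's actual interface.

```python
# Hypothetical sketch: score a mammography classifier's breast-level
# predictions against ground-truth labels with AUROC.
# Assumed CSV layout (illustrative, not the meta-repository's format):
#   predictions.csv: breast_id, malignancy_score
#   labels.csv:      breast_id, malignant  (0 = negative/benign, 1 = malignant)
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate_breast_level_auc(pred_csv: str, label_csv: str) -> float:
    preds = pd.read_csv(pred_csv)
    labels = pd.read_csv(label_csv)
    # Join on the breast identifier so every prediction is paired with its label.
    merged = labels.merge(preds, on="breast_id", how="inner")
    return roc_auc_score(merged["malignant"], merged["malignancy_score"])

if __name__ == "__main__":
    auc = evaluate_breast_level_auc("predictions.csv", "labels.csv")
    print(f"Breast-level AUROC: {auc:.3f}")
```

The point of such a framework is that any data set prepared in a common format can be run through every packaged model and scored with the same metric, which is what makes the cross-data-set comparison described above possible.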
Related papers
- Deep BI-RADS Network for Improved Cancer Detection from Mammograms [3.686808512438363]
We introduce a novel multi-modal approach that combines textual BI-RADS lesion descriptors with visual mammogram content.
Our method employs iterative attention layers to effectively fuse these different modalities.
Experiments on the CBIS-DDSM dataset demonstrate substantial improvements across all metrics.
arXiv Detail & Related papers (2024-11-16T21:32:51Z)
- Potential of Multimodal Large Language Models for Data Mining of Medical Images and Free-text Reports [51.45762396192655]
Multimodal large language models (MLLMs) have recently transformed many domains, significantly affecting the medical field. Notably, Gemini-Vision-series (Gemini) and GPT-4-series (GPT-4) models have epitomized a paradigm shift in Artificial General Intelligence for computer vision.
This study exhaustively evaluated the Gemini and GPT-4 models, along with four other popular large models, across 14 medical imaging datasets.
arXiv Detail & Related papers (2024-07-08T09:08:42Z)
- Key Patches Are All You Need: A Multiple Instance Learning Framework For Robust Medical Diagnosis [15.964609888720315]
We propose to limit the amount of information that deep learning models use to reach the final classification by using a multiple instance learning framework (a generic MIL pooling sketch appears after this list).
We evaluate our framework on two medical applications: skin cancer diagnosis using dermoscopy and breast cancer diagnosis using mammography.
arXiv Detail & Related papers (2024-05-02T18:21:25Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Federated Learning with Research Prototypes for Multi-Center MRI-based Detection of Prostate Cancer with Diverse Histopathology [3.8613414331251423]
We introduce a flexible federated learning framework for cross-site training, validation, and evaluation of deep prostate cancer detection algorithms.
Our results show increases in prostate cancer detection and classification accuracy using a specialized neural network model and diverse prostate biopsy data.
We open-source our FLtools system, which can be easily adapted to other deep learning projects for medical imaging.
arXiv Detail & Related papers (2022-06-11T21:28:17Z)
- Metastatic Cancer Outcome Prediction with Injective Multiple Instance Pooling [1.0965065178451103]
We process two public datasets to set up a benchmark cohort of 341 patients in total for studying outcome prediction of metastatic cancer.
We propose two injective multiple instance pooling functions that are better suited to outcome prediction.
Our results show that multiple instance learning with injective pooling functions can achieve state-of-the-art performance in the non-small-cell lung cancer CT and head and neck CT outcome prediction benchmarking tasks.
arXiv Detail & Related papers (2022-03-09T16:58:03Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, which is a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Synthesizing lesions using contextual GANs improves breast cancer classification on mammograms [0.4297070083645048]
We present a novel generative adversarial network (GAN) model for data augmentation that can realistically synthesize and remove lesions on mammograms.
With self-attention and semi-supervised learning components, the U-net-based architecture can generate high-resolution (256x256 px) outputs.
arXiv Detail & Related papers (2020-05-29T21:23:00Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-location, multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
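Several of the papers above rely on multiple instance learning (MIL), where a set of patch embeddings from one image (a "bag") is pooled into a single prediction. The following is a generic sketch of attention-weighted MIL pooling in Python; it is an illustrative example, not the specific pooling functions or frameworks proposed in those papers, and all layer sizes are assumptions.

```python
# Generic sketch of attention-based multiple instance learning (MIL) pooling.
# A "bag" is the set of patch embeddings extracted from one image; the model
# learns per-patch attention weights and classifies the weighted average.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128, n_classes: int = 2):
        super().__init__()
        # Scores one attention logit per instance (patch) in the bag.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_instances, feat_dim) patch embeddings for one image.
        scores = self.attention(bag)                # (num_instances, 1)
        weights = torch.softmax(scores, dim=0)      # normalize over instances
        bag_embedding = (weights * bag).sum(dim=0)  # (feat_dim,)
        return self.classifier(bag_embedding)       # bag-level logits

# Example: 30 patch embeddings of dimension 512 from one image.
logits = AttentionMILPooling()(torch.randn(30, 512))
print(logits.shape)  # torch.Size([2])
```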