Memory-aware curriculum federated learning for breast cancer
classification
- URL: http://arxiv.org/abs/2107.02504v1
- Date: Tue, 6 Jul 2021 09:50:20 GMT
- Title: Memory-aware curriculum federated learning for breast cancer
classification
- Authors: Amelia Jiménez-Sánchez, Mickael Tardy, Miguel A. González Ballester,
Diana Mateus, Gemma Piella
- Abstract summary: For early breast cancer detection, regular screening mammography imaging is recommended.
A potential solution to such class-imbalance is joining forces across multiple institutions.
Recently, federated learning has emerged as an effective tool for collaborative learning.
- Score: 2.244916866651468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For early breast cancer detection, regular screening with mammography imaging
is recommended. Routine examinations result in datasets with a predominant
amount of negative samples. A potential solution to such class-imbalance is
joining forces across multiple institutions. Developing a collaborative
computer-aided diagnosis system is challenging in different ways. Patient
privacy and regulations need to be carefully respected. Data across
institutions may be acquired from different devices or imaging protocols,
leading to heterogeneous non-IID data. Also, for learning-based methods, new
optimization strategies working on distributed data are required. Recently,
federated learning has emerged as an effective tool for collaborative learning.
In this setting, local models perform computation on their private data to
update the global model. The order and the frequency of local updates influence
the final global model. Hence, the order in which samples are locally presented
to the optimizers plays an important role. In this work, we define a
memory-aware curriculum learning method for the federated setting. Our
curriculum controls the order of the training samples, paying special attention
to those that are forgotten after the deployment of the global model. Our
approach is combined with unsupervised domain adaptation to deal with domain
shift while preserving data privacy. We evaluate our method with three clinical
datasets from different vendors. Our results verify the effectiveness of
federated adversarial learning for multi-site breast cancer classification.
Moreover, we show that our proposed memory-aware curriculum method is
beneficial to further improve classification performance. Our code is publicly
available at: https://github.com/ameliajimenez/curriculum-federated-learning.
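As a rough illustration of the memory-aware curriculum described in the abstract, the snippet below treats a local sample as "forgotten" when the client's previous model classified it correctly but the freshly received global model does not, and then builds a training order that presents forgotten samples earlier. This is a minimal sketch under assumed conventions, not the authors' implementation (which is in the linked repository); the function names, the `boost` factor, and the weighted-permutation rule are illustrative.

```python
import numpy as np

def forgotten_mask(correct_before, correct_after):
    """Mark samples the previous local model got right but the freshly
    received global model now gets wrong (i.e. 'forgotten' samples)."""
    correct_before = np.asarray(correct_before, dtype=bool)
    correct_after = np.asarray(correct_after, dtype=bool)
    return correct_before & ~correct_after

def curriculum_order(correct_before, correct_after, boost=2.0, rng=None):
    """Return an ordering of local sample indices for the next local epoch.

    Forgotten samples get a higher scheduling weight (illustrative `boost`),
    so the local optimizer tends to revisit them first; the remaining
    samples follow in a weighted random order.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = np.where(forgotten_mask(correct_before, correct_after), boost, 1.0)
    # Weighted random permutation (Efraimidis-Spirakis keys):
    # higher weight -> larger key on average -> earlier position.
    keys = rng.random(len(weights)) ** (1.0 / weights)
    return np.argsort(-keys)

# Toy usage: six local samples, two of them forgotten after the global update.
before = [1, 1, 0, 1, 0, 1]
after  = [1, 0, 0, 0, 0, 1]
print(curriculum_order(before, after, rng=np.random.default_rng(0)))
```

In a federated round, each client would evaluate its private samples right after downloading the global weights, compare against the predictions of its previous local model to obtain `correct_before` and `correct_after`, and feed the resulting order to its data loader before running the local updates that are sent back to the server.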
Related papers
- CL3: A Collaborative Learning Framework for the Medical Data Ensuring Data Privacy in the Hyperconnected Environment [1.223961905359498]
In a hyperconnected environment, medical institutions are concerned with data privacy when sharing and transmitting sensitive patient information.
A collaborative learning framework, including transfer, federated, and incremental learning, can generate efficient, secure, and scalable models.
This study aims to address the detection of COVID-19 using chest X-ray images through a proposed collaborative learning framework called CL3.
arXiv Detail & Related papers (2024-10-10T13:29:12Z)
- Medical Federated Model with Mixture of Personalized and Sharing Components [31.068735334318088]
We propose a new personalized framework of federated learning to handle the problem.
It successfully yields personalized models based on awareness of similarity between local data.
Also, we propose an effective method to reduce the computational cost, which improves computation efficiency significantly.
arXiv Detail & Related papers (2023-06-26T07:50:32Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating parameter updates computed locally on the training data of spatially distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT)
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases [57.90226879210227]
FedCy is a federated semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once (a minimal weight-averaging sketch follows this list).
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- Multi-site fMRI Analysis Using Privacy-preserving Federated Learning and Domain Adaptation: ABIDE Results [13.615292855384729]
To train a high-quality deep learning model, the aggregation of a significant amount of patient information is required.
Due to the need to protect the privacy of patient data, it is hard to assemble a central database from multiple institutions.
Federated learning allows for population-level models to be trained without centralizing entities' data.
arXiv Detail & Related papers (2020-01-16T04:49:33Z)
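As a companion to the BERT WEAVER entry above, whose title points to weight averaging, here is a minimal, hypothetical sketch of blending an old and a new parameter set name by name; WEAVER's actual weighting of old versus new knowledge is defined in that paper and may differ.

```python
def weight_average(old_params, new_params, alpha=0.5):
    """Blend two compatible parameter sets; alpha is the share of the new
    model (0.5 is a plain mean). Works the same way on per-layer tensors."""
    return {k: alpha * new_params[k] + (1.0 - alpha) * old_params[k] for k in new_params}

# Toy usage with scalar 'parameters'; a real model would supply one tensor per layer.
old = {"layer.weight": 0.2, "layer.bias": -1.0}
new = {"layer.weight": 0.6, "layer.bias": -0.4}
print(weight_average(old, new))  # {'layer.weight': 0.4, 'layer.bias': -0.7}
```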
This list is automatically generated from the titles and abstracts of the papers in this site.