Continual Domain Incremental Learning for Privacy-aware Digital Pathology
- URL: http://arxiv.org/abs/2409.06455v1
- Date: Tue, 10 Sep 2024 12:21:54 GMT
- Title: Continual Domain Incremental Learning for Privacy-aware Digital Pathology
- Authors: Pratibha Kumari, Daniel Reisenbüchler, Lucas Luttner, Nadine S. Schaadt, Friedrich Feuerhake, Dorit Merhof
- Abstract summary: Continual learning (CL) techniques aim to reduce the forgetting of past data when learning new data with distributional shift conditions.
We develop a Generative Latent Replay-based CL (GLRCL) approach that models past data distributions instead of storing past samples and performs latent replay with new data.
- Score: 3.6630930118966814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been remarkable progress in the field of digital pathology, driven by the ability to model complex tissue patterns using advanced deep-learning algorithms. However, the robustness of these models is often severely compromised in the presence of data shifts (e.g., different stains, organs, or centers). Alternatively, continual learning (CL) techniques aim to reduce the forgetting of past data when learning new data with distributional shift conditions. Specifically, rehearsal-based CL techniques, which store some past data in a buffer and then replay it with new data, have proven effective in medical image analysis tasks. However, privacy concerns arise as these approaches store past data, prompting the development of our novel Generative Latent Replay-based CL (GLRCL) approach. GLRCL captures the previous distribution through Gaussian Mixture Models instead of storing past samples; these models are then used to generate features and perform latent replay with new data. We systematically evaluate our proposed framework under different shift conditions in histopathology data, including stain and organ shift. Our approach significantly outperforms popular buffer-free CL approaches and performs similarly to rehearsal-based CL approaches that require large buffers, which raise serious privacy concerns.
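The buffer-free mechanism described in the abstract, summarising a past domain with Gaussian Mixture Models over latent features and replaying synthetic features alongside new data, can be illustrated with a minimal sketch. This is not the authors' released code; the per-class GMM layout, component count, and replay size below are assumptions made for illustration.

```python
# Minimal sketch of GMM-based latent replay (illustrative assumptions, not the GLRCL code):
# past-domain features are summarised by per-class Gaussian Mixture Models, and synthetic
# features sampled from them are mixed with new-domain features during training.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(old_feats: np.ndarray, old_labels: np.ndarray, n_components: int = 8) -> dict:
    """Fit one diagonal-covariance GMM per class on past latent features; no raw images are kept."""
    return {
        c: GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
            .fit(old_feats[old_labels == c])
        for c in np.unique(old_labels)
    }

def sample_latent_replay(gmms: dict, n_per_class: int):
    """Draw synthetic 'past' features and matching labels from the stored mixture models."""
    feats, labels = [], []
    for c, gmm in gmms.items():
        x, _ = gmm.sample(n_per_class)
        feats.append(x)
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)

# Usage: mix generated past features with the new domain's features for each update step.
# replay_x, replay_y = sample_latent_replay(gmms, n_per_class=32)
# batch_x = np.concatenate([new_feats, replay_x]); batch_y = np.concatenate([new_labels, replay_y])
```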
Related papers
- TS-ACL: A Time Series Analytic Continual Learning Framework for Privacy-Preserving and Class-Incremental Pattern Recognition [14.108911377558242]
TS-ACL is a novel framework for privacy-preserving and class-incremental pattern recognition.
It transforms each update of the model into a gradient-free analytical learning process with a closed-form solution.
It simultaneously achieves non-forgetting, privacy preservation, and lightweight consumption, making it widely suitable for various applications.
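As a rough illustration of what a gradient-free, closed-form update can look like, the sketch below keeps a recursive ridge-regression head on frozen features; this is a generic analytic-learning example under assumed details, not the TS-ACL implementation.

```python
# Generic closed-form (gradient-free) classifier head, updated chunk by chunk without
# revisiting old data. Feature extraction is assumed to happen elsewhere and stay frozen.
import numpy as np

class AnalyticHead:
    def __init__(self, feat_dim: int, n_classes: int, reg: float = 1e-2):
        self.R = np.eye(feat_dim) / reg                  # running inverse of (X^T X + reg*I)
        self.W = np.zeros((feat_dim, n_classes))         # closed-form ridge-regression weights

    def update(self, X: np.ndarray, Y_onehot: np.ndarray) -> None:
        """Absorb one new chunk (X: n x d features, Y_onehot: n x C targets) analytically."""
        K = X @ self.R
        # Woodbury identity: refresh the stored inverse using only the new chunk.
        self.R -= K.T @ np.linalg.inv(np.eye(len(X)) + K @ X.T) @ K
        # Recursive least-squares update of the weights (closed form, no gradients).
        self.W += self.R @ X.T @ (Y_onehot - X @ self.W)

    def predict(self, X: np.ndarray) -> np.ndarray:
        return (X @ self.W).argmax(axis=1)
```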
arXiv Detail & Related papers (2024-10-21T12:34:02Z)
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [13.836798036474143]
The key challenge in Federated Class Continual Learning is catastrophic forgetting.
We propose a novel method of data replay based on diffusion models.
Our method significantly outperforms existing baselines.
arXiv Detail & Related papers (2024-09-02T10:07:24Z)
- CCSI: Continual Class-Specific Impression for Data-free Class Incremental Learning [22.37848405465699]
Class incremental learning offers a promising solution by adapting a deep network trained on specific disease classes to handle new diseases.
Previously proposed methodologies to overcome this require perpetual storage of previous samples.
We propose a novel data-free class incremental learning framework that utilizes data synthesis on learned classes instead of data storage from previous classes.
arXiv Detail & Related papers (2024-06-09T03:52:21Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome the catastrophic forgetting of former knowledge when learning new knowledge.
This paper presents a comprehensive survey of the latest advancements in pre-trained model (PTM) based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, this is often infeasible in practice due to memory constraints or data privacy issues.
As an alternative, data-free data replay methods are proposed that invert samples from the classification model.
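A bare-bones sketch of such sample inversion follows: synthetic inputs are optimised so that the frozen old classifier assigns them the desired past classes. The image size, optimiser settings, and the simple L2 prior are illustrative assumptions, not the paper's exact recipe.

```python
# Illustrative model inversion for data-free replay: synthesise pseudo past-class inputs
# from a frozen classifier instead of storing real past data. Hyper-parameters are assumed.
import torch
import torch.nn.functional as F

def invert_samples(old_model: torch.nn.Module, target_classes: torch.Tensor,
                   image_shape=(3, 224, 224), steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    """target_classes: LongTensor of past-class indices, one per sample to synthesise."""
    old_model.eval()
    x = torch.randn(len(target_classes), *image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = old_model(x)
        # Push each synthetic input towards its target past class, with a weak L2 image prior.
        loss = F.cross_entropy(logits, target_classes) + 1e-4 * x.pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach()  # pseudo past samples, replayed while learning the new task
```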
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Detecting Morphing Attacks via Continual Incremental Training [10.796380524798744]
The recent Continual Learning (CL) paradigm may represent an effective solution to enable incremental training, even across multiple sites.
We investigate the performance of different Continual Learning methods in this scenario, simulating a learning model that is updated every time a new chunk of data, even of variable size, is available.
Experimental results reveal that a particular CL method, namely Learning without Forgetting (LwF), is one of the best-performing algorithms.
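For reference, the Learning-without-Forgetting objective mentioned above pairs a standard loss on the new data chunk with a distillation term that keeps the updated network close to the frozen previous model; a minimal sketch is given below, with temperature and weighting chosen as assumed values.

```python
# Minimal LwF-style objective: supervised loss on the new data chunk plus knowledge
# distillation towards the previous model's soft predictions (no old data required).
import torch
import torch.nn.functional as F

def lwf_loss(new_logits: torch.Tensor, old_logits: torch.Tensor, targets: torch.Tensor,
             T: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    ce = F.cross_entropy(new_logits, targets)                       # fit the new chunk
    distill = F.kl_div(F.log_softmax(new_logits / T, dim=1),
                       F.softmax(old_logits.detach() / T, dim=1),
                       reduction="batchmean") * (T * T)             # stay close to the old model
    return (1 - alpha) * ce + alpha * distill
```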
arXiv Detail & Related papers (2023-07-27T17:48:29Z)
- RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
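A speculative sketch of how random projections can be combined with a frozen pre-trained backbone: features are expanded through a fixed nonlinear random projection, and a simple class-prototype classifier is accumulated continually on top. The projection size, ReLU nonlinearity, and prototype head are assumptions, not necessarily the exact RanPAC design.

```python
# Assumed sketch: fixed random projection over frozen backbone features, with class
# prototypes accumulated across tasks (the projection itself is never trained).
import numpy as np

class RandomProjectionHead:
    def __init__(self, feat_dim: int, n_classes: int, proj_dim: int = 2048, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.P = rng.standard_normal((feat_dim, proj_dim)) / np.sqrt(feat_dim)  # fixed projection
        self.prototypes = np.zeros((n_classes, proj_dim))
        self.counts = np.zeros(n_classes)

    def _project(self, feats: np.ndarray) -> np.ndarray:
        return np.maximum(feats @ self.P, 0.0)           # nonlinear random features

    def update(self, feats: np.ndarray, labels: np.ndarray) -> None:
        """labels are assumed to be integers in [0, n_classes); only running sums are stored."""
        h = self._project(feats)
        for c in np.unique(labels):
            self.prototypes[c] += h[labels == c].sum(axis=0)
            self.counts[c] += int((labels == c).sum())

    def predict(self, feats: np.ndarray) -> np.ndarray:
        protos = self.prototypes / np.maximum(self.counts[:, None], 1)
        return (self._project(feats) @ protos.T).argmax(axis=1)
```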
arXiv Detail & Related papers (2023-07-05T12:49:02Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- The CLEAR Benchmark: Continual LEArning on Real-World Imagery [77.98377088698984]
Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI.
We introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts.
We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms.
arXiv Detail & Related papers (2022-01-17T09:09:09Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion (CMI), where data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
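As a loose illustration of treating diversity as an optimisable objective, one can add a repulsion term over the embeddings of each synthetic batch so that generated samples remain mutually distinguishable; this is a simplified stand-in for the paper's contrastive formulation, with an assumed temperature.

```python
# Simplified diversity term (an illustrative stand-in, not the CMI objective itself):
# penalise high pairwise similarity among embeddings of generated samples.
import torch
import torch.nn.functional as F

def diversity_loss(embeddings: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    z = F.normalize(embeddings, dim=1)                              # unit-norm embeddings
    sim = z @ z.T / temperature                                     # pairwise cosine similarities
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                      # ignore self-similarity
    return torch.logsumexp(sim, dim=1).mean()                       # lower value = more diverse batch
```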
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.