Bayesian Federated Cause-of-Death Classification and Quantification Under Distribution Shift
- URL: http://arxiv.org/abs/2505.02257v1
- Date: Sun, 04 May 2025 21:29:59 GMT
- Title: Bayesian Federated Cause-of-Death Classification and Quantification Under Distribution Shift
- Authors: Yu Zhu, Zehang Richard Li
- Abstract summary: In regions lacking medically certified causes of death, verbal autopsy (VA) is a critical and widely used tool to ascertain the cause of death through interviews with caregivers. Data collected by VAs are often analyzed using probabilistic algorithms. Most existing VA algorithms rely on centralized training, requiring full access to training data for joint modeling. We propose a novel Bayesian Federated Learning (BFL) framework that avoids data sharing across multiple training sources.
- Score: 5.86981636653323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In regions lacking medically certified causes of death, verbal autopsy (VA) is a critical and widely used tool to ascertain the cause of death through interviews with caregivers. Data collected by VAs are often analyzed using probabilistic algorithms. The performance of these algorithms often degrades due to distributional shift across populations. Most existing VA algorithms rely on centralized training, requiring full access to training data for joint modeling. This is often infeasible due to privacy and logistical constraints. In this paper, we propose a novel Bayesian Federated Learning (BFL) framework that avoids data sharing across multiple training sources. Our method enables reliable individual-level cause-of-death classification and population-level quantification of cause-specific mortality fractions (CSMFs), in a target domain with limited or no local labeled data. The proposed framework is modular, computationally efficient, and compatible with a wide range of existing VA algorithms as candidate models, facilitating flexible deployment in real-world mortality surveillance systems. We validate the performance of BFL through extensive experiments on two real-world VA datasets under varying levels of distribution shift. Our results show that BFL significantly outperforms the base models built on a single domain and achieves comparable or better performance compared to joint modeling.
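As a hypothetical illustration of the kind of aggregation such a framework could perform (this is a sketch under assumed names and a simple weighting scheme, not the paper's actual algorithm): each source domain contributes only its model's predicted cause-of-death probabilities, never its raw training data; the target combines them with domain weights, yielding both individual classifications and population-level CSMFs.

```python
import numpy as np

# Hypothetical sketch of federated ensemble aggregation for cause-of-death
# (COD) classification and CSMF quantification. Each source domain shares
# only predicted probabilities -- never raw training data. The weighting
# scheme and all names below are illustrative assumptions.

rng = np.random.default_rng(0)
n_deaths, n_causes, n_sources = 6, 3, 2

# Per-source predicted COD probabilities for each death in the target domain.
source_probs = rng.dirichlet(np.ones(n_causes), size=(n_sources, n_deaths))

# Domain weights, e.g. estimated from a small labeled target set.
weights = np.array([0.7, 0.3])

# Individual-level classification: weighted average of source posteriors.
target_probs = np.tensordot(weights, source_probs, axes=1)  # (n_deaths, n_causes)
predicted_cause = target_probs.argmax(axis=1)

# Population-level quantification: CSMF as the mean posterior over deaths.
csmf = target_probs.mean(axis=0)
```

Because each row of `target_probs` is a convex combination of probability vectors, the estimated CSMF vector still sums to one.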
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD, motivated by increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Vicinal Feature Statistics Augmentation for Federated 3D Medical Volume Segmentation [17.096806029281385]
Federated learning (FL) enables multiple client medical institutes to collaboratively train a deep learning (DL) model with privacy protection.
However, the performance of FL can be constrained by the limited availability of labeled data in small institutes and the heterogeneous (i.e., non-i.i.d.) data distribution across institutes.
We develop a vicinal feature-level data augmentation scheme to efficiently alleviate the local feature shift and facilitate collaborative training for privacy-aware FL segmentation.
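A minimal sketch of what feature-level vicinal augmentation can look like (an illustrative assumption, not this paper's exact scheme): re-style each sample's channel statistics with statistics drawn from a nearby distribution, simulating cross-client feature shift.

```python
import numpy as np

# Hypothetical vicinal feature-statistics augmentation: normalize features
# by their batch statistics, then re-style them with perturbed ("vicinal")
# statistics. The perturbation scale is an illustrative assumption.

rng = np.random.default_rng(2)
feat = rng.normal(loc=2.0, scale=1.5, size=(8, 16))  # (batch, channels)

mu, sigma = feat.mean(axis=0), feat.std(axis=0)
# Vicinal statistics: small Gaussian perturbations around batch statistics.
mu_v = mu + rng.normal(scale=0.1, size=mu.shape)
sigma_v = np.abs(sigma + rng.normal(scale=0.1, size=sigma.shape))

# Normalize, then re-style with the vicinal statistics.
augmented = (feat - mu) / (sigma + 1e-6) * sigma_v + mu_v
```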
arXiv Detail & Related papers (2023-10-23T21:14:52Z) - Federated Meta-Learning for Few-Shot Fault Diagnosis with Representation Encoding [21.76802204235636]
We propose representation encoding-based federated meta-learning (REFML) for few-shot fault diagnosis.
REFML harnesses the inherent generalization among training clients, effectively transforming it into an advantage on out-of-distribution data.
It achieves an increase in accuracy by 2.17%-6.50% when tested on unseen working conditions of the same equipment type and 13.44%-18.33% when tested on totally unseen equipment types.
arXiv Detail & Related papers (2023-10-13T10:48:28Z) - Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models [2.192472845284658]
Using 610,000 chest radiographs from five institutions, we assessed diagnostic performance as a function of training strategy.
Large datasets showed minimal performance gains with FL but, in some instances, even exhibited decreases.
When trained collaboratively across diverse external institutions, AI models consistently surpassed models trained locally for off-domain tasks.
arXiv Detail & Related papers (2023-10-01T18:27:59Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Unifying and Personalizing Weakly-supervised Federated Medical Image Segmentation via Adaptive Representation and Aggregation [1.121358474059223]
Federated learning (FL) enables multiple sites to collaboratively train powerful deep models without compromising data privacy and security.
Weakly supervised segmentation, which uses sparsely-grained supervision, is increasingly being paid attention to due to its great potential of reducing annotation costs.
We propose a novel personalized FL framework for medical image segmentation, named FedICRA, which uniformly leverages heterogeneous weak supervision.
arXiv Detail & Related papers (2023-04-12T06:32:08Z) - FV-UPatches: Enhancing Universality in Finger Vein Recognition [0.6299766708197883]
We propose a universal learning-based framework, which achieves generalization while training with limited data.
The proposed framework shows application potential in other vein-based biometric recognition as well.
arXiv Detail & Related papers (2022-06-02T14:20:22Z) - Flexible Amortized Variational Inference in qBOLD MRI [56.4324135502282]
Oxygen extraction fraction (OEF) and deoxygenated blood volume (DBV) are more ambiguously determined from the data.
Existing inference methods tend to yield very noisy and underestimated OEF maps, while overestimating DBV.
This work describes a novel probabilistic machine learning approach that can infer plausible distributions of OEF and DBV.
arXiv Detail & Related papers (2022-03-11T10:47:16Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias of the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
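A rough sketch in the spirit of CCVR (all shapes, statistics, and the nearest-class-mean head are illustrative stand-ins, not the paper's implementation): fit per-class Gaussians over feature statistics, sample virtual representations, and refit only the classifier head on them.

```python
import numpy as np

# Hypothetical sketch of classifier calibration with virtual
# representations: per-class Gaussian feature statistics stand in for
# the aggregated client statistics; virtual features are sampled and a
# simple nearest-class-mean head is refit on them.

rng = np.random.default_rng(1)
n_classes, feat_dim, n_virtual = 3, 4, 200

# Stand-in for aggregated per-class feature statistics.
class_means = rng.normal(size=(n_classes, feat_dim))
class_covs = np.stack([np.eye(feat_dim) * 0.1 for _ in range(n_classes)])

# Sample virtual representations per class.
feats, labels = [], []
for c in range(n_classes):
    feats.append(rng.multivariate_normal(class_means[c], class_covs[c], n_virtual))
    labels.append(np.full(n_virtual, c))
X, y = np.vstack(feats), np.concatenate(labels)

# Refit a minimal classifier head (nearest class mean) on virtual features.
refit_means = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
dists = ((X[:, None, :] - refit_means[None]) ** 2).sum(axis=-1)
acc = (dists.argmin(axis=1) == y).mean()
```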
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.