PrivFED -- A Framework for Privacy-Preserving Federated Learning in Enhanced Breast Cancer Diagnosis
- URL: http://arxiv.org/abs/2405.08084v1
- Date: Mon, 13 May 2024 18:01:57 GMT
- Title: PrivFED -- A Framework for Privacy-Preserving Federated Learning in Enhanced Breast Cancer Diagnosis
- Authors: Maithili Jha, S. Maitri, M. Lohithdakshan, Shiny Duela J, K. Raja
- Abstract summary: This study introduces a federated learning framework, trained on the Wisconsin dataset, to mitigate challenges such as data scarcity and imbalance.
The model exhibits an average accuracy of 99.95% on edge devices and 98% on the central server.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the day-to-day operations of healthcare institutions, a multitude of Personally Identifiable Information (PII) data exchanges occur, exposing the data to a spectrum of cybersecurity threats. This study introduces a federated learning framework, trained on the Wisconsin dataset, to mitigate challenges such as data scarcity and imbalance. Techniques like the Synthetic Minority Over-sampling Technique (SMOTE) are incorporated to bolster robustness, while isolation forests are employed to fortify the model against outliers. CatBoost serves as the classification tool across all devices. Optimal features for heightened accuracy are identified through Principal Component Analysis (PCA), and a comparative analysis underscores the significance of hyperparameter tuning. The model exhibits an average accuracy of 99.95% on edge devices and 98% on the central server.
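As a rough illustration of the per-client pipeline described in the abstract (isolation-forest outlier removal, SMOTE oversampling, PCA, and CatBoost classification), the following sketch uses the Wisconsin breast cancer dataset with standard scikit-learn, imbalanced-learn, and CatBoost APIs. The hyperparameters and train/test split are illustrative assumptions, and the federated aggregation between edge devices and the central server is not shown.

```python
# Minimal per-client sketch of the pipeline described above (assumed
# hyperparameters; the paper's exact configuration and the federated
# aggregation protocol are not reproduced here).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import IsolationForest
from sklearn.decomposition import PCA
from imblearn.over_sampling import SMOTE
from catboost import CatBoostClassifier

X, y = load_breast_cancer(return_X_y=True)  # Wisconsin dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# 1) Remove outliers from the local training split with an isolation forest.
mask = IsolationForest(contamination=0.05, random_state=42).fit_predict(X_train) == 1
X_train, y_train = X_train[mask], y_train[mask]

# 2) Rebalance the classes with SMOTE.
X_train, y_train = SMOTE(random_state=42).fit_resample(X_train, y_train)

# 3) Reduce dimensionality with PCA (the number of components is an assumption).
pca = PCA(n_components=10).fit(X_train)
X_train, X_test = pca.transform(X_train), pca.transform(X_test)

# 4) Train a CatBoost classifier on the local (edge) data.
clf = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1, verbose=False)
clf.fit(X_train, y_train)
print("local accuracy:", clf.score(X_test, y_test))
```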
Related papers
- Privacy-Preserving SAM Quantization for Efficient Edge Intelligence in Healthcare [9.381558154295012]
Segment Anything Model (SAM) excels in intelligent image segmentation.
However, SAM poses significant challenges for deployment on resource-limited edge devices.
We propose a data-free quantization framework for SAM, called DFQ-SAM, which learns and calibrates quantization parameters without any original data.
arXiv Detail & Related papers (2024-09-14T10:43:35Z)
- Electroencephalogram Emotion Recognition via AUC Maximization [0.0]
Imbalanced datasets pose significant challenges in areas including neuroscience, cognitive science, and medical diagnostics.
This study addresses the issue of class imbalance, using the 'Liking' label in the DEAP dataset as an example.
arXiv Detail & Related papers (2024-08-16T19:08:27Z)
- Machine learning-based network intrusion detection for big and imbalanced data using oversampling, stacking feature embedding and feature extraction [6.374540518226326]
Intrusion Detection Systems (IDS) play a critical role in protecting interconnected networks by detecting malicious actors and activities.
This paper introduces a novel ML-based network intrusion detection model that uses Random Oversampling (RO) to address data imbalance, together with stacking feature embedding and Principal Component Analysis (PCA) for dimensionality reduction.
Using the CIC-IDS 2017 dataset, DT, RF, and ET models reach 99.99% accuracy, while DT and RF models obtain 99.94% accuracy on the CIC-IDS 2018 dataset.
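A minimal sketch of that kind of oversample-then-reduce pipeline is shown below; a synthetic imbalanced dataset stands in for CIC-IDS, and off-the-shelf scikit-learn and imbalanced-learn components are used rather than the paper's exact stacking feature embedding.

```python
# Generic sketch of an oversample -> PCA -> tree-ensemble pipeline (not the
# paper's exact method; synthetic data stands in for the CIC-IDS datasets).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from imblearn.over_sampling import RandomOverSampler

X, y = make_classification(n_samples=20000, n_features=40, n_informative=20,
                           weights=[0.95, 0.05], random_state=0)  # imbalanced
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random Oversampling to balance the training split.
X_tr, y_tr = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

# PCA for dimensionality reduction.
pca = PCA(n_components=15).fit(X_tr)
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

# Compare the three tree-based classifiers mentioned above.
for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("ET", ExtraTreesClassifier(n_estimators=100, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 4))
```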
arXiv Detail & Related papers (2024-01-22T05:49:41Z)
- Boosting Transformer's Robustness and Efficacy in PPG Signal Artifact Detection with Self-Supervised Learning [0.0]
This study addresses the underutilization of abundant unlabeled data by employing self-supervised learning (SSL) to extract latent features from this data.
Our experiments demonstrate that SSL significantly enhances the Transformer model's ability to learn representations.
This approach holds promise for broader applications in PICU environments, where annotated data is often limited.
arXiv Detail & Related papers (2024-01-02T04:00:48Z)
- Comparative Analysis of Imbalanced Malware Byteplot Image Classification using Transfer Learning [0.873811641236639]
Malware detectors help defend against cyber-attacks by comparing malware signatures.
In this paper, the performance of six multiclass classification models is compared.
It is observed that the greater the class imbalance, the fewer epochs are required for convergence.
arXiv Detail & Related papers (2023-10-04T11:33:36Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant Disease Diagnosis [64.82680813427054]
Plant diseases serve as one of the main threats to food security and crop production.
One popular approach is to cast this problem as a leaf image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates rectified meta-learning module into common CNN paradigm to train a noise-robust deep network without using extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.