Privacy at Scale in Networked Healthcare
- URL: http://arxiv.org/abs/2601.04298v1
- Date: Wed, 07 Jan 2026 17:58:58 GMT
- Title: Privacy at Scale in Networked Healthcare
- Authors: M. Amin Rahimian, Benjamin Panny, James Joshi
- Abstract summary: Digitized, networked healthcare promises earlier detection, precision therapeutics, and continuous care. Yet, it also expands the surface for privacy loss and compliance risk. We argue for a shift from siloed, application-specific protections to privacy-by-design at scale.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digitized, networked healthcare promises earlier detection, precision therapeutics, and continuous care; yet, it also expands the surface for privacy loss and compliance risk. We argue for a shift from siloed, application-specific protections to privacy-by-design at scale, centered on decision-theoretic differential privacy (DP) across the full healthcare data lifecycle; network-aware privacy accounting for interdependence in people, sensors, and organizations; and compliance-as-code tooling that lets health systems share evidence while demonstrating regulatory due care. We synthesize the privacy-enhancing technology (PET) landscape in health (federated analytics, DP, cryptographic computation), identify practice gaps, and outline a deployable agenda involving privacy-budget ledgers, a control plane to coordinate PET components across sites, shared testbeds, and PET literacy, to make lawful, trustworthy sharing the default. We illustrate with use cases (multi-site trials, genomics, disease surveillance, mHealth) and highlight distributed inference as a workhorse for multi-institution learning under explicit privacy budgets.
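The privacy-budget ledger proposed in the abstract can be sketched minimally. The class name, API, and epsilon values below are illustrative assumptions for a single-site ledger under basic sequential composition, not the paper's implementation:

```python
import random

class PrivacyBudgetLedger:
    """Minimal sketch of a per-site differential-privacy budget ledger.

    Every DP query records the epsilon it spends (basic sequential
    composition); a query that would exceed the total budget is refused.
    """

    def __init__(self, total_epsilon: float):
        self.total_epsilon = total_epsilon
        self.entries = []  # (description, epsilon) audit trail

    @property
    def spent(self) -> float:
        return sum(eps for _, eps in self.entries)

    def laplace_count(self, true_count: float, epsilon: float, description: str) -> float:
        """Answer a counting query (sensitivity 1) via the Laplace mechanism."""
        if self.spent + epsilon > self.total_epsilon:
            raise RuntimeError("privacy budget exhausted; query refused")
        self.entries.append((description, epsilon))
        # Laplace(0, 1/epsilon) noise as the difference of two exponentials.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise
```

A multi-site deployment would additionally need the network-aware accounting the abstract calls for, since interdependent records mean budgets cannot be tracked per site in isolation.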
Related papers
- Single-Pixel Vision-Language Model for Intrinsic Privacy-Preserving Behavioral Intelligence
We propose the Single-Pixel Vision-Language Model (SP-VLM), a novel framework that reimagines secure environmental monitoring. It achieves intrinsic privacy-by-design by capturing human dynamics through inherently low-dimensional single-pixel modalities. We show that SP-VLM can nonetheless extract meaningful behavioral semantics, enabling robust anomaly detection, people counting, and activity understanding.
arXiv Detail & Related papers (2026-01-21T09:11:26Z)
- Differential Privacy-Driven Framework for Enhancing Heart Disease Prediction
Machine learning is critical in healthcare, supporting personalized treatment, early disease detection, predictive analytics, image interpretation, drug discovery, efficient operations, and patient monitoring. In this paper, we utilize machine learning methodologies, including differential privacy and federated learning, to develop privacy-preserving models. Our results show that using a federated learning model with differential privacy achieved a test accuracy of 85%, ensuring patient data remained secure and private throughout the process.
arXiv Detail & Related papers (2025-04-25T01:27:40Z)
- Towards Privacy-Preserving Medical Imaging: Federated Learning with Differential Privacy and Secure Aggregation Using a Modified ResNet Architecture
This research introduces a federated learning framework that combines local differential privacy and secure aggregation. We also propose DPResNet, a modified ResNet architecture optimized for differential privacy.
arXiv Detail & Related papers (2024-12-01T05:52:29Z)
- Data Obfuscation through Latent Space Projection (LSP) for Privacy-Preserving AI Governance: Case Studies in Medical Diagnosis and Finance Fraud Detection
This paper introduces Data Obfuscation through Latent Space Projection (LSP), a novel technique aimed at enhancing AI governance and ensuring Responsible AI compliance.
LSP uses machine learning to project sensitive data into a latent space, effectively obfuscating it while preserving essential features for model training and inference.
We validate LSP's effectiveness through experiments on benchmark datasets and two real-world case studies: healthcare cancer diagnosis and financial fraud analysis.
arXiv Detail & Related papers (2024-10-22T22:31:03Z)
- A Qualitative Analysis Framework for mHealth Privacy Practices
This paper introduces a novel framework for the qualitative evaluation of privacy practices in mHealth apps.
Our investigation encompasses an analysis of 152 leading mHealth apps on the Android platform.
Our findings indicate persistent issues with negligence and misuse of sensitive user information.
arXiv Detail & Related papers (2024-05-28T08:57:52Z)
- A Unified View of Differentially Private Deep Generative Modeling
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Towards Blockchain-Assisted Privacy-Aware Data Sharing For Edge Intelligence: A Smart Healthcare Perspective
Linkage attacks are a dominant class of attack in the privacy domain.
Adversaries can also launch poisoning attacks to falsify health data, leading to misdiagnosis or even physical harm.
To protect private health data, we propose a personalized differential privacy model based on the trust levels among users.
arXiv Detail & Related papers (2023-06-29T02:06:04Z)
- Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging
We evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training.
Our study shows that -- under the challenging realistic circumstances of a real-life clinical dataset -- the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
arXiv Detail & Related papers (2023-02-03T09:49:13Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy?
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy-preserving medical image analysis
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Private Reinforcement Learning with PAC and Regret Guarantees
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- COVI White Paper
Contact tracing is an essential tool to change the course of the Covid-19 pandemic.
We present an overview of the rationale, design, ethical considerations, and privacy strategy of COVI, a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.