Privacy-Preserving Dataset Combination
- URL: http://arxiv.org/abs/2502.05765v3
- Date: Fri, 17 Oct 2025 01:57:18 GMT
- Title: Privacy-Preserving Dataset Combination
- Authors: Keren Fuentes, Mimee Xu, Irene Chen
- Abstract summary: We introduce SecureKL, a protocol for dataset-to-dataset evaluations with zero privacy leakage. SecureKL evaluates a source dataset against candidates, computing dataset divergence metrics internally with private computations. On real-world data, SecureKL achieves high consistency ($>90\%$ correlation with non-private counterparts) and successfully identifies beneficial data collaborations.
- Score: 1.0485433579460999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Access to diverse, high-quality datasets is crucial for machine learning model performance, yet data sharing remains limited by privacy concerns and competitive interests, particularly in regulated domains like healthcare. This dynamic especially disadvantages smaller organizations that lack resources to purchase data or negotiate favorable sharing agreements, due to the inability to privately assess external data's utility. To resolve privacy and uncertainty tensions simultaneously, we introduce SecureKL, the first secure protocol for dataset-to-dataset evaluations with zero privacy leakage, designed to be applied before data sharing. SecureKL evaluates a source dataset against candidates, computing dataset divergence metrics internally with private computations, all without assuming downstream models. On real-world data, SecureKL achieves high consistency ($>90\%$ correlation with non-private counterparts) and successfully identifies beneficial data collaborations in highly heterogeneous domains (ICU mortality prediction across hospitals and income prediction across states). Our results highlight that secure computation maximizes data utilization, outperforming privacy-agnostic utility assessments that leak information.
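The abstract does not specify the exact divergence estimator, so the following is only a minimal plaintext sketch of the kind of score a source holder might compute against a candidate dataset: a KL divergence between diagonal-Gaussian fits of the two datasets. In SecureKL this quantity would be computed inside a secure protocol so that neither party sees the other's data; the sketch below does not implement that secure layer.

```python
import numpy as np

def gaussian_kl(p_mean, p_var, q_mean, q_var):
    """KL divergence between two diagonal Gaussians, summed over features."""
    return 0.5 * np.sum(
        np.log(q_var / p_var) + (p_var + (p_mean - q_mean) ** 2) / q_var - 1.0
    )

def dataset_divergence(source, candidate):
    """Score a candidate by KL(source || candidate) between Gaussian fits;
    lower means the candidate's distribution is closer to the source's."""
    return gaussian_kl(source.mean(0), source.var(0) + 1e-6,
                       candidate.mean(0), candidate.var(0) + 1e-6)

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(500, 8))
near = rng.normal(0.1, 1.0, size=(500, 8))  # similar candidate
far = rng.normal(2.0, 3.0, size=(500, 8))   # heterogeneous candidate
print(dataset_divergence(source, near))  # small divergence
print(dataset_divergence(source, far))   # much larger divergence
```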
Related papers
- How to DP-fy Your Data: A Practical Guide to Generating Synthetic Data With Differential Privacy [52.00934156883483]
Differential Privacy (DP) is a framework for reasoning about and limiting information leakage.
Differentially private synthetic data refers to synthetic data that preserves the overall trends of source data.
arXiv Detail & Related papers (2025-12-02T21:14:39Z)
- Information-theoretic Estimation of the Risk of Privacy Leaks [0.0]
Dependencies between items in a dataset can lead to privacy leaks.
We measure the correlation between the original data and their noisy responses from a randomizer as an indicator of potential privacy breaches.
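The summary does not give the paper's exact estimator, so the snippet below is only a hedged illustration of the idea: apply binary randomized response, then measure how correlated the noisy outputs remain with the true bits; higher correlation indicates more residual leakage.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=10_000)  # private binary attribute

def randomized_response(bits, p_truth):
    """Report the true bit with probability p_truth, else a fresh random bit."""
    keep = rng.random(bits.shape) < p_truth
    noise = rng.integers(0, 2, size=bits.shape)
    return np.where(keep, bits, noise)

for p in (0.55, 0.75, 0.95):
    noisy = randomized_response(truth, p)
    corr = np.corrcoef(truth, noisy)[0, 1]
    print(f"p_truth={p:.2f}  correlation={corr:.3f}")  # grows with p: more leakage
```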
arXiv Detail & Related papers (2025-06-14T03:39:11Z)
- Optimal Allocation of Privacy Budget on Hierarchical Data Release [48.96399034594329]
This paper addresses the problem of optimal privacy budget allocation for hierarchical data release.
It aims to maximize data utility subject to a total privacy budget while considering the inherent trade-offs between data granularity and privacy loss.
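The allocation rule itself is not stated in the summary, so the sketch below only illustrates the setting: a total budget is split across hierarchy levels by an assumed weighting, and each level's counts receive Laplace noise calibrated to its share; by sequential composition the levels together consume eps_total.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hierarchy of counts: country -> state -> county.
levels = {
    "country": np.array([10_000.0]),
    "state": rng.integers(50, 500, size=20).astype(float),
    "county": rng.integers(1, 50, size=200).astype(float),
}
eps_total = 1.0
weights = {"country": 0.2, "state": 0.3, "county": 0.5}  # assumed split, not the paper's

for name, counts in levels.items():
    eps = eps_total * weights[name]
    # Counting queries have sensitivity 1, so the Laplace scale is 1/eps.
    noisy = counts + rng.laplace(scale=1.0 / eps, size=counts.shape)
    print(f"{name:8s} eps={eps:.2f}  mean abs error={np.mean(np.abs(noisy - counts)):.2f}")
```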
arXiv Detail & Related papers (2025-05-16T05:25:11Z)
- Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG).
FedE4RAG facilitates collaborative training of client-side RAG retrieval models.
We apply homomorphic encryption within federated learning to safeguard model parameters.
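As a hedged illustration of that last ingredient (the paper's actual HE scheme and integration are not given in the summary), here is a sketch using the python-paillier library: clients encrypt their updates, the aggregator sums ciphertexts without decrypting, and only the key holder recovers the aggregate.

```python
# pip install phe  (the python-paillier library)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Three clients' updates for a single model parameter.
client_updates = [0.12, -0.05, 0.31]

# Each client encrypts locally; the aggregator adds ciphertexts blindly.
ciphertexts = [public_key.encrypt(u) for u in client_updates]
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])

# Only the private-key holder can recover the aggregated update.
print(private_key.decrypt(encrypted_sum) / len(client_updates))  # ~0.1267
```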
arXiv Detail & Related papers (2025-04-27T04:26:02Z)
- Federated Learning for Cross-Domain Data Privacy: A Distributed Approach to Secure Collaboration [13.206587690640147]
This paper proposes a data privacy protection framework based on federated learning.
It aims to enable effective cross-domain data collaboration while ensuring data privacy through distributed learning.
arXiv Detail & Related papers (2025-03-31T23:04:45Z)
- DP-CDA: An Algorithm for Enhanced Privacy Preservation in Dataset Synthesis Through Randomized Mixing [0.8739101659113155]
We introduce an effective data publishing algorithm, DP-CDA.
Our proposed algorithm generates synthetic datasets by randomly mixing data in a class-specific manner and inducing carefully-tuned randomness to ensure privacy guarantees.
Our results indicate that synthetic datasets produced using DP-CDA can achieve superior utility compared to those generated by traditional data publishing algorithms, even when subject to the same privacy requirements.
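A minimal sketch of that class-specific mixing idea, with assumed details the summary does not provide (the mixing degree k, the noise scale, and their calibration are illustrative, not the paper's): average k randomly chosen same-class samples and perturb the result.

```python
import numpy as np

rng = np.random.default_rng(3)

def mix_and_perturb(X, y, k=4, sigma=0.1, n_out=1000):
    """Average k same-class samples per synthetic record, then add noise.
    k and sigma jointly control the privacy/utility trade-off (assumed values)."""
    classes = np.unique(y)
    synth_X, synth_y = [], []
    for _ in range(n_out):
        c = rng.choice(classes)
        idx = rng.choice(np.where(y == c)[0], size=k, replace=False)
        synth_X.append(X[idx].mean(axis=0) + rng.normal(0.0, sigma, size=X.shape[1]))
        synth_y.append(c)
    return np.array(synth_X), np.array(synth_y)

X = rng.normal(size=(500, 5))
y = rng.integers(0, 3, size=500)
Xs, ys = mix_and_perturb(X, y)
print(Xs.shape, ys.shape)  # (1000, 5) (1000,)
```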
arXiv Detail & Related papers (2024-11-25T06:14:06Z)
- Privacy-Preserving Collaborative Genomic Research: A Real-Life Deployment and Vision [2.7968600664591983]
This paper presents a privacy-preserving framework for genomic research, developed in collaboration with Lynx.MD.
The framework addresses critical cybersecurity and privacy challenges, enabling the privacy-preserving sharing and analysis of genomic data.
Implementing the framework within Lynx.MD involves encoding genomic data into binary formats and applying noise through controlled perturbation techniques.
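A hedged toy version of that pipeline (the encoding table and flip probability below are assumptions, not Lynx.MD's deployment details): map genotype calls to bits, then flip each bit independently with small probability as the controlled perturbation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical encoding: genotype call at each site -> a 2-bit code.
CODE = {"AA": (0, 0), "AB": (0, 1), "BA": (1, 0), "BB": (1, 1)}

def encode(genotypes):
    return np.array([b for g in genotypes for b in CODE[g]], dtype=np.uint8)

def perturb(bits, flip_prob=0.05):
    """Flip each bit independently with probability flip_prob."""
    flips = (rng.random(bits.shape) < flip_prob).astype(np.uint8)
    return bits ^ flips

genotypes = rng.choice(list(CODE), size=100)
bits = encode(genotypes)
noisy = perturb(bits)
print("fraction of bits flipped:", np.mean(bits != noisy))
```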
arXiv Detail & Related papers (2024-07-12T05:43:13Z)
- Federated Transfer Learning with Differential Privacy [21.50525027559563]
Federated learning has emerged as a powerful framework for analysing distributed data.
In this paper, we aim to enhance learning on a target data set by leveraging information from multiple heterogeneous source data sets.
We rigorously formulate the notion of federated differential privacy, which offers privacy guarantees for each data set.
arXiv Detail & Related papers (2024-03-17T21:04:48Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- CaPS: Collaborative and Private Synthetic Data Generation from Distributed Sources [5.898893619901382]
We propose a framework for the collaborative and private generation of synthetic data from distributed data holders.
We replace the trusted aggregator with secure multi-party computation protocols and provide output privacy via differential privacy (DP).
We demonstrate the applicability and scalability of our approach for the state-of-the-art select-measure-generate algorithms MWEM+PGM and AIM.
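To make the "replace the trusted aggregator" step concrete, here is a minimal additive secret-sharing sketch (a standard MPC building block, not necessarily CaPS's exact protocol): each party splits its value into random shares summing to it modulo a prime, so the total can be reconstructed without revealing any individual value.

```python
import secrets

P = 2**61 - 1  # prime modulus for additive sharing

def share(value, n_parties):
    """Split value into n additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three data holders privately sum their local counts.
values = [120, 45, 300]
all_shares = [share(v, 3) for v in values]

# Each party i sums the i-th share it receives from everyone -- no party
# ever sees another party's raw value.
partial_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]
total = sum(partial_sums) % P
print(total)  # 465, equal to sum(values)
```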
arXiv Detail & Related papers (2024-02-13T17:26:32Z)
- Libertas: Privacy-Preserving Collective Computation for Decentralised Personal Data Stores [18.91869691495181]
We introduce a modular architecture, Libertas, to integrate MPC with personal data stores (PDS) such as Solid.
We introduce a paradigm shift from an 'omniscient' view to an individual-based, user-centric view of trust and security.
arXiv Detail & Related papers (2023-09-28T12:07:40Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z)
- Collective Privacy Recovery: Data-sharing Coordination via Decentralized Artificial Intelligence [2.309914459672557]
We show how to automate and scale up complex collective arrangements for privacy recovery.
We compare, for the first time, attitudinal, intrinsic, rewarded, and coordinated data sharing.
Strikingly, data-sharing coordination proves to be a win-win for all.
arXiv Detail & Related papers (2023-01-15T01:36:46Z)
- Privacy-Preserving Machine Learning for Collaborative Data Sharing via Auto-encoder Latent Space Embeddings [57.45332961252628]
Privacy-preserving machine learning in data-sharing processes is an ever-critical task.
This paper presents an innovative framework that uses Representation Learning via autoencoders to generate privacy-preserving embedded data.
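As a rough stand-in for the paper's learned autoencoder (whose architecture is not given in the summary), the sketch below shares low-dimensional latent embeddings instead of raw records, using a truncated SVD, which coincides with the optimum of a linear autoencoder.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))  # raw records that must not be shared

# A linear autoencoder's optimum coincides with PCA, so as a minimal
# stand-in for a learned autoencoder we use a truncated SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
encoder = Vt[:4].T            # 10 features -> 4 latent dimensions
Z = Xc @ encoder              # embeddings to share instead of raw data

recon = Z @ encoder.T + X.mean(axis=0)
print("embedding shape:", Z.shape)
print("reconstruction error:", np.mean((X - recon) ** 2))
```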
arXiv Detail & Related papers (2022-11-10T17:36:58Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
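For context, here is plain $k$-anonymity (not the paper's smooth variant, whose definition is not given in the summary): a dataset is $k$-anonymous when every combination of quasi-identifier values occurs at least $k$ times.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values()) >= k

records = [
    {"zip": "021**", "age_band": "30-39", "diagnosis": "flu"},
    {"zip": "021**", "age_band": "30-39", "diagnosis": "asthma"},
    {"zip": "021**", "age_band": "40-49", "diagnosis": "flu"},
]
print(is_k_anonymous(records, ["zip", "age_band"], k=2))  # False: one group of size 1
```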
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
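A minimal sketch of the idea, not the paper's exact mechanism: release Laplace-noised per-group correct and total counts, so group accuracies and their disparity can be estimated while individual contributions to the counts stay protected.

```python
import numpy as np

rng = np.random.default_rng(5)

def dp_group_accuracy(correct, total, eps):
    """Noisy accuracy per group: add Laplace noise (sensitivity 1 per count)
    to numerator and denominator, splitting eps between them."""
    scale = 2.0 / eps  # each count gets eps/2
    noisy_c = correct + rng.laplace(scale=scale, size=len(correct))
    noisy_t = total + rng.laplace(scale=scale, size=len(total))
    return np.clip(noisy_c / np.maximum(noisy_t, 1.0), 0.0, 1.0)

correct = np.array([880.0, 400.0])  # correct predictions, group A vs group B
total = np.array([1000.0, 500.0])
acc = dp_group_accuracy(correct, total, eps=1.0)
print("noisy accuracies:", acc, " noisy disparity:", abs(acc[0] - acc[1]))
```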
arXiv Detail & Related papers (2022-06-24T09:46:43Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates allows the inference of private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
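A toy version of the nullspace idea under a simplifying assumption (that the protected quantity is the network average, which may differ from the paper's setup): perturbations that sum to zero across agents lie in the nullspace of the averaging operator, so they mask individual estimates while leaving the aggregate untouched.

```python
import numpy as np

rng = np.random.default_rng(6)
n_agents, dim = 5, 3
estimates = rng.normal(size=(n_agents, dim))  # each agent's local estimate

# Independent noise protects agents but corrupts the average.
indep = rng.normal(scale=1.0, size=(n_agents, dim))

# Zero-sum noise: subtract the per-coordinate mean so columns sum to zero,
# i.e. the noise lies in the nullspace of the averaging map.
zero_sum = indep - indep.mean(axis=0, keepdims=True)

true_avg = estimates.mean(axis=0)
print(np.abs((estimates + indep).mean(axis=0) - true_avg).max())     # noticeable error
print(np.abs((estimates + zero_sum).mean(axis=0) - true_avg).max())  # ~1e-16
```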
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- A Critical Overview of Privacy-Preserving Approaches for Collaborative Forecasting [0.0]
Cooperation between different data owners may lead to an improvement in forecast quality.
Due to business competitive factors and personal data protection questions, said data owners might be unwilling to share their data.
This paper analyses the state-of-the-art and unveils several shortcomings of existing methods in guaranteeing data privacy.
arXiv Detail & Related papers (2020-04-20T20:21:04Z)