FakeSafe: Human Level Data Protection by Disinformation Mapping using
Cycle-consistent Adversarial Network
- URL: http://arxiv.org/abs/2011.11278v2
- Date: Thu, 10 Dec 2020 04:10:21 GMT
- Title: FakeSafe: Human Level Data Protection by Disinformation Mapping using
Cycle-consistent Adversarial Network
- Authors: He Zhu and Dianbo Liu
- Abstract summary: The disinformation strategy can be adapted into data science to protect valuable private and sensitive data.
Huge amounts of private data are being generated by personal devices such as smartphones and wearables.
Building a secure data transfer and storage infrastructure to preserve privacy is costly in most cases, and there is always a concern about data security due to human errors.
We propose a method, named FakeSafe, to provide human-level data protection using a generative adversarial network with cycle consistency.
- Score: 4.987581730476023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The concept of disinformation is to use fake messages to confuse people in
order to protect the real information. This strategy can be adapted into data
science to protect valuable private and sensitive data. Huge amounts of private
data have been generated by personal devices such as smartphones and wearables
in recent years. Being able to utilize these personal data would bring big
opportunities to design personalized products, conduct precision healthcare and
perform many other tasks that were impossible in the past. However, due to
privacy, safety and regulation reasons, it is often difficult to transfer or
store data in its original form while keeping it safe. Building a secure data
transfer and storage infrastructure to preserve privacy is costly in most
cases, and there is always a concern about data security due to human errors.
In this study, we propose a method, named FakeSafe, to provide human-level
data protection using a generative adversarial network with cycle consistency,
and we conducted experiments using both benchmark and real-world data sets to
illustrate potential applications of FakeSafe.
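The abstract describes mapping sensitive records to decoy ("fake") records with a cycle-consistent GAN, so that the original can still be recovered. As a toy illustration of the cycle-consistency idea only, and not the authors' trained networks, the sketch below substitutes two invertible linear maps for the generators G (sensitive to fake) and F (fake to sensitive) and evaluates the cycle loss that a CycleGAN-style model would minimize:

```python
import numpy as np

# Toy stand-ins for the two generators (hypothetical, not FakeSafe's
# trained networks): G maps sensitive records to fake records, F maps back.
A = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible mixing matrix
A_inv = np.linalg.inv(A)

def G(x: np.ndarray) -> np.ndarray:
    """Map a sensitive record to its disinformation (fake) counterpart."""
    return A @ x

def F(y: np.ndarray) -> np.ndarray:
    """Map a fake record back to the sensitive domain."""
    return A_inv @ y

def cycle_consistency_loss(x: np.ndarray) -> float:
    """L1 cycle loss ||F(G(x)) - x||_1, the term a CycleGAN minimizes."""
    return float(np.abs(F(G(x)) - x).sum())

record = np.array([3.0, -1.5])   # a hypothetical sensitive record
fake = G(record)                 # what would be stored or transmitted
loss = cycle_consistency_loss(record)
```

In the actual method, G and F are neural networks trained adversarially so that G(x) resembles plausible decoy data, while the cycle loss keeps F(G(x)) close to x, allowing authorized recovery of the original.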
Related papers
- FT-PrivacyScore: Personalized Privacy Scoring Service for Machine Learning Participation [4.772368796656325]
In practice, controlled data access remains a mainstream method for protecting data privacy in many industrial and research environments.
We developed the demo prototype FT-PrivacyScore to show that it's possible to efficiently and quantitatively estimate the privacy risk of participating in a model fine-tuning task.
arXiv Detail & Related papers (2024-10-30T02:41:26Z) - Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z) - AI-Driven Anonymization: Protecting Personal Data Privacy While
Leveraging Machine Learning [5.015409508372732]
This paper focuses on personal data privacy protection and the promotion of anonymity as its core research objectives.
It achieves personal data privacy protection and detection through the use of machine learning's differential privacy protection algorithm.
The paper also addresses existing challenges in machine learning related to privacy and personal data protection, offers improvement suggestions, and analyzes factors impacting datasets to enable timely personal data privacy detection and protection.
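The entry above refers to a differential-privacy algorithm without detailing it. As a hedged, generic illustration (the Laplace mechanism, a standard differential-privacy primitive, not necessarily the one that paper uses), a numeric query can be privatized by adding noise scaled to sensitivity/epsilon:

```python
import numpy as np

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    """Noise scale b = sensitivity / epsilon for the Laplace mechanism."""
    return sensitivity / epsilon

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise."""
    b = laplace_scale(sensitivity, epsilon)
    return true_value + rng.laplace(loc=0.0, scale=b)

# Example: privately release a count query (sensitivity 1) at epsilon = 0.5.
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means a larger noise scale and stronger privacy at the cost of accuracy, which is the privacy-utility trade-off several entries in this list discuss.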
arXiv Detail & Related papers (2024-02-27T04:12:25Z) - The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
This suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
arXiv Detail & Related papers (2022-06-21T15:25:56Z) - Privacy-Utility Trades in Crowdsourced Signal Map Obfuscation [20.58763760239068]
Crowdsourced cellular signal strength measurements can be used to generate signal maps that improve network performance.
We consider obfuscating such data before the data leaves the mobile device.
Our evaluation results, based on multiple, diverse, real-world signal map datasets, demonstrate the feasibility of concurrently achieving adequate privacy and utility.
arXiv Detail & Related papers (2022-01-13T03:46:22Z) - Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure
Dataset Release [52.504589728136615]
We develop a data poisoning method by which publicly released data can be minimally modified to prevent others from training models on it.
We demonstrate the success of our approach on ImageNet classification and on facial recognition.
arXiv Detail & Related papers (2021-02-16T19:12:34Z) - BeeTrace: A Unified Platform for Secure Contact Tracing that Breaks Data
Silos [73.84437456144994]
Contact tracing is an important method to control the spread of an infectious disease such as COVID-19.
Current solutions do not utilize the huge volume of data stored in business databases and individual digital devices.
We propose BeeTrace, a unified platform that breaks data silos and deploys state-of-the-art cryptographic protocols to guarantee privacy goals.
arXiv Detail & Related papers (2020-07-05T10:33:45Z) - Security and Privacy Preserving Deep Learning [2.322461721824713]
Massive data collection required for deep learning presents obvious privacy issues.
Users' personal, highly sensitive data, such as photos and voice recordings, are kept indefinitely by the companies that collect them.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
arXiv Detail & Related papers (2020-06-23T01:53:46Z) - A vision for global privacy bridges: Technical and legal measures for
international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z) - Utility-aware Privacy-preserving Data Releasing [7.462336024223669]
We propose a two-step perturbation-based privacy-preserving data releasing framework.
First, certain predefined privacy and utility problems are learned from the public domain data.
We then leverage the learned knowledge to precisely perturb the data owners' data into privatized data.
arXiv Detail & Related papers (2020-05-09T05:32:46Z) - Beyond privacy regulations: an ethical approach to data usage in
transportation [64.86110095869176]
We describe how Federated Machine Learning can be applied to the transportation sector.
We see Federated Learning as a method that enables us to process privacy-sensitive data while respecting customers' privacy.
arXiv Detail & Related papers (2020-04-01T15:10:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.