Data-Driven Dystopia: an uninterrupted breach of ethics
- URL: http://arxiv.org/abs/2305.07934v1
- Date: Sat, 13 May 2023 14:56:18 GMT
- Title: Data-Driven Dystopia: an uninterrupted breach of ethics
- Authors: Shreyansh Padarha
- Abstract summary: The article presents instances of data breaches and data harvesting practices that violate user privacy.
It also explores the concept of "Weapons Of Math Destruction" (WMDs).
The article highlights the need for companies to take responsibility for safeguarding user information.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article discusses the risks and complexities associated with the
exponential rise in data and the misuse of data by large corporations. The
article presents instances of data breaches and data harvesting practices that
violate user privacy. It also explores the concept of "Weapons Of Math
Destruction" (WMDs), which refers to big data models that perpetuate inequality
and discrimination. The article highlights the need for companies to take
responsibility for safeguarding user information and the ethical use of data
models, AI, and ML. The article also emphasises the significance of data
privacy for individuals in their daily lives and the need for a more conscious
and responsible approach towards data management.
Related papers
- Root causes, ongoing difficulties, proactive prevention techniques, and emerging trends of enterprise data breaches [0.0]
Businesses now consider data to be a crucial asset, and any breach of this data can have dire repercussions.
Enterprises now place a high premium on detecting and preventing data loss due to the growing amount of data and the increasing frequency of data breaches.
This review highlights promising directions and offers insights to readers interested in the risks that businesses face from data leaks.
arXiv Detail & Related papers (2023-11-27T20:34:10Z) - The Use of Synthetic Data to Train AI Models: Opportunities and Risks
for Sustainable Development [0.6906005491572401]
This paper investigates the policies governing the creation, utilization, and dissemination of synthetic data.
A well-crafted synthetic data policy must strike a balance between privacy concerns and the utility of data.
arXiv Detail & Related papers (2023-08-31T23:18:53Z) - Towards Generalizable Data Protection With Transferable Unlearnable
Examples [50.628011208660645]
We present a novel, generalizable data protection method by generating transferable unlearnable examples.
To the best of our knowledge, this is the first solution that examines data privacy from the perspective of data distribution.
arXiv Detail & Related papers (2023-05-18T04:17:01Z) - Synthetic Data: Methods, Use Cases, and Risks [11.413309528464632]
A possible alternative gaining momentum in both the research community and industry is to share synthetic data instead.
We provide a gentle introduction to synthetic data and discuss its use cases, the privacy challenges that are still unaddressed, and its inherent limitations as an effective privacy-enhancing technology.
arXiv Detail & Related papers (2023-03-01T16:35:33Z) - Membership Inference Attacks against Synthetic Data through Overfitting
Detection [84.02632160692995]
We argue for a realistic membership inference attack (MIA) setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model (a minimal illustrative sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-02-24T11:27:39Z) - Certified Data Removal in Sum-Product Networks [78.27542864367821]
Deleting the collected data is often insufficient to guarantee data privacy.
UnlearnSPN is an algorithm that removes the influence of single data points from a trained sum-product network.
arXiv Detail & Related papers (2022-10-04T08:22:37Z) - Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure
Dataset Release [52.504589728136615]
We develop a data poisoning method by which publicly released data can be minimally modified to prevent others from training models on it.
We demonstrate the success of our approach on ImageNet classification and on facial recognition.
arXiv Detail & Related papers (2021-02-16T19:12:34Z) - Second layer data governance for permissioned blockchains: the privacy
management challenge [58.720142291102135]
In pandemic situations, such as the COVID-19 and Ebola outbreaks, sharing health data is crucial to contain mass infection and reduce the number of deaths.
In this sense, permissioned blockchain technology emerges to empower users to exercise their rights, providing data ownership, transparency, and security through an immutable, unified, and distributed database governed by smart contracts.
arXiv Detail & Related papers (2020-10-22T13:19:38Z) - Security and Privacy Preserving Deep Learning [2.322461721824713]
Massive data collection required for deep learning presents obvious privacy issues.
Users' personal, highly sensitive data, such as photos and voice recordings, are kept indefinitely by the companies that collect them.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
arXiv Detail & Related papers (2020-06-23T01:53:46Z) - A vision for global privacy bridges: Technical and legal measures for
international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.