Root causes, ongoing difficulties, proactive prevention techniques, and emerging trends of enterprise data breaches
- URL: http://arxiv.org/abs/2311.16303v1
- Date: Mon, 27 Nov 2023 20:34:10 GMT
- Title: Root causes, ongoing difficulties, proactive prevention techniques, and emerging trends of enterprise data breaches
- Authors: Rina Patil, Gayatri Pise, Yatin Bhosale
- Abstract summary: Businesses now consider data to be a crucial asset, and any breach of this data can have dire repercussions.
Enterprises now place a high premium on detecting and preventing data loss due to the growing amount of data and the increasing frequency of data breaches.
This review attempts to highlight interesting prospects and offer insightful information to those who are interested in learning about the risks that businesses face from data leaks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A data breach in the modern digital era is the unintentional or intentional disclosure of private data to uninvited parties. Businesses now consider data to be a crucial asset, and any breach of this data can have dire repercussions, including harming a company's brand and resulting in losses. Enterprises now place a high premium on detecting and preventing data loss due to the growing amount of data and the increasing frequency of data breaches. Even with a great deal of research, protecting sensitive data is still a difficult task. This review attempts to highlight interesting prospects and offer insightful information to those who are interested in learning about the risks that businesses face from data leaks, current occurrences, state-of-the-art methods for detection and prevention, new difficulties, and possible solutions.
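The abstract above centers on detecting and preventing data loss. As a minimal illustrative sketch (not a technique described in the paper), the Python snippet below shows a rule-based scan for common sensitive-data patterns; the pattern set and the `scan_text` helper are hypothetical simplifications of what a production data-loss-prevention tool would use.

```python
import re

# Hypothetical, simplified patterns; real DLP systems combine many more
# detectors (keywords, checksums, document fingerprints, ML classifiers).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def luhn_ok(candidate: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like numbers."""
    nums = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(nums)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return len(nums) >= 13 and total % 10 == 0

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (category, match) pairs for potentially sensitive strings."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            if name == "card_number" and not luhn_ok(match):
                continue  # skip digit runs that fail the Luhn check
            findings.append((name, match))
    return findings

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
    for category, value in scan_text(sample):
        print(f"possible {category}: {value}")
```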
Related papers
- Investigating Vulnerabilities of GPS Trip Data to Trajectory-User Linking Attacks [49.1574468325115]
We propose a novel attack to reconstruct user identifiers in GPS trip datasets consisting of single trips.
We show that the risk of re-identification is significant even when personal identifiers have been removed.
Further investigations indicate that users who frequently visit locations that are only visited by a small number of others tend to be more vulnerable to re-identification.
arXiv Detail & Related papers (2025-02-12T08:54:49Z)
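The trajectory-user linking entry above reports that trips through rarely visited locations are easier to re-identify. The toy sketch below is not the attack from that paper; the user profiles and location names are invented, and it simply counts how many known profiles could have produced an "anonymous" trip.

```python
# Hypothetical auxiliary knowledge: locations each known user is seen at.
user_locations = {
    "alice": {"cafe_A", "gym_B", "office_C"},
    "bob":   {"cafe_A", "office_C", "stadium_D"},
    "carol": {"cafe_A", "clinic_E", "office_C"},
}

def candidate_users(trip_locations: set[str]) -> list[str]:
    """Users whose known locations cover every stop of the anonymous trip."""
    return [u for u, locs in user_locations.items() if trip_locations <= locs]

# A trip through a rarely visited place (clinic_E) narrows down to one user,
# while a trip through popular places stays ambiguous.
rare_trip = {"cafe_A", "clinic_E"}
common_trip = {"cafe_A", "office_C"}

print(candidate_users(rare_trip))    # ['carol']  -> re-identified
print(candidate_users(common_trip))  # ['alice', 'bob', 'carol'] -> ambiguous
```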
- Towards Data Governance of Frontier AI Models [0.0]
We look at how data can enable new governance capacities for frontier AI models.
Data is non-rival, often non-excludable, easily replicable, and increasingly synthesizable.
We propose a set of policy mechanisms targeting key actors along the data supply chain.
arXiv Detail & Related papers (2024-12-05T02:37:51Z)
- A Customer Level Fraudulent Activity Detection Benchmark for Enhancing Machine Learning Model Research and Evaluation [0.4681661603096334]
This study introduces a benchmark that contains structured datasets specifically designed for customer-level fraud detection.
The benchmark not only adheres to strict privacy guidelines to ensure user confidentiality but also provides a rich source of information by encapsulating customer-centric features.
arXiv Detail & Related papers (2024-04-23T04:57:44Z)
- Towards Generalizable Data Protection With Transferable Unlearnable Examples [50.628011208660645]
We present a novel, generalizable data protection method by generating transferable unlearnable examples.
To the best of our knowledge, this is the first solution that examines data privacy from the perspective of data distribution.
arXiv Detail & Related papers (2023-05-18T04:17:01Z)
- Stop Uploading Test Data in Plain Text: Practical Strategies for Mitigating Data Contamination by Evaluation Benchmarks [70.39633252935445]
Data contamination has become prevalent and challenging with the rise of models pretrained on large automatically-crawled corpora.
For closed models, the training data becomes a trade secret, and even for open models, it is not trivial to detect contamination.
We propose three strategies that can make a difference: (1) test data made public should be encrypted with a public key and licensed to disallow derivative distribution; (2) demand training exclusion controls from closed API holders, and protect your test data by refusing to evaluate without them; and (3) avoid data which appears with its solution on the internet, and release the web-page context of internet-derived data. A minimal sketch of strategy (1) appears after this entry.
arXiv Detail & Related papers (2023-05-17T12:23:38Z)
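Strategy (1) in the entry above proposes releasing public test data only in encrypted form. One possible reading of that, sketched below with the third-party `cryptography` package, is hybrid encryption: a Fernet key protects the payload and an RSA public key wraps the Fernet key. The key sizes, helper names, and sample data are illustrative assumptions, not a protocol from the paper.

```python
# pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def encrypt_test_set(plaintext: bytes, public_key) -> tuple[bytes, bytes]:
    """Hybrid encryption: Fernet for the payload, RSA-OAEP for the Fernet key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = public_key.encrypt(
        data_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return ciphertext, wrapped_key

def decrypt_test_set(ciphertext: bytes, wrapped_key: bytes, private_key) -> bytes:
    data_key = private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return Fernet(data_key).decrypt(ciphertext)

if __name__ == "__main__":
    # Illustrative key pair; a benchmark maintainer would publish only the
    # encrypted test split, keeping the plaintext out of crawled corpora.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    test_split = b"question\tanswer\n2+2\t4\n"
    blob, wrapped = encrypt_test_set(test_split, private_key.public_key())
    assert decrypt_test_set(blob, wrapped, private_key) == test_split
```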
- Data-Driven Dystopia: an uninterrupted breach of ethics [0.0]
The article presents instances of data breaches and data harvesting practices that violate user privacy.
It also explores the concept of "Weapons of Math Destruction" (WMDs).
The article highlights the need for companies to take responsibility for safeguarding user information.
arXiv Detail & Related papers (2023-05-13T14:56:18Z)
- Data Poisoning Attacks and Defenses to Crowdsourcing Systems [26.147716118854614]
We show that crowdsourcing is vulnerable to data poisoning attacks, in which malicious clients provide carefully crafted data to corrupt the aggregated data.
We propose two defenses to reduce the impact of malicious clients.
arXiv Detail & Related papers (2021-02-18T06:03:48Z)
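The crowdsourcing entry above describes malicious clients corrupting an aggregate. The toy example below is not that paper's attack or its two defenses; it contrasts a plain mean of worker reports with a trimmed mean, a generic robust-aggregation trick that bounds the influence of a few poisoned submissions.

```python
import statistics

def trimmed_mean(values: list[float], trim_fraction: float = 0.2) -> float:
    """Drop the lowest and highest trim_fraction of values before averaging."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return statistics.fmean(kept)

# Honest workers rate an item around 4; two malicious workers submit 0
# to drag the aggregate down.
reports = [4.0, 4.5, 4.0, 3.5, 4.0, 4.5, 4.0, 3.5, 0.0, 0.0]

print(round(statistics.fmean(reports), 2))  # 3.2  (plain mean is corrupted)
print(round(trimmed_mean(reports), 2))      # 3.83 (trimmed mean resists)
```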
- Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release [52.504589728136615]
We develop a data poisoning method by which publicly released data can be minimally modified to prevent others from training models on it.
We demonstrate the success of our approach on ImageNet classification and on facial recognition.
arXiv Detail & Related papers (2021-02-16T19:12:34Z)
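The two dataset-protection entries above (transferable unlearnable examples, and poisoning for secure dataset release) both perturb data before publication so that models trained on it perform poorly. The sketch below illustrates one generic flavor of that idea, bounded error-minimizing noise crafted against a surrogate model on random stand-in data; the surrogate, the budget, and the single optimization loop are assumptions for illustration, not the method of either paper.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in "released" data: random images and labels.
images = torch.rand(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

# Small surrogate classifier used only to craft the perturbations.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
epsilon = 8 / 255                       # visually small L-infinity budget
delta = torch.zeros_like(images, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(200):
    logits = model(torch.clamp(images + delta, 0, 1))
    # Minimizing the surrogate's training loss removes the learning signal:
    # the perturbed images already look "solved" to a classifier.
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    delta.data.clamp_(-epsilon, epsilon)  # keep the change imperceptible

protected = torch.clamp(images + delta.detach(), 0, 1)  # data to release
```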
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- Leaking Sensitive Financial Accounting Data in Plain Sight using Deep Autoencoder Neural Networks [1.9659095632676094]
We introduce a real-world "threat model" designed to leak sensitive accounting data.
We show that a deep steganographic process, constituted by three neural networks, can be trained to hide such data in unobtrusive "day-to-day" images.
arXiv Detail & Related papers (2020-12-13T17:29:53Z)
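The accounting-data entry above hides records inside ordinary images with a deep steganographic pipeline. The PyTorch sketch below trains a generic hide/reveal pair on random tensors to convey the idea; the two-network layout, layer sizes, and loss weights are arbitrary assumptions rather than the three-network architecture from that paper.

```python
import torch
from torch import nn

class Hide(nn.Module):
    """Combine a cover image and a secret (image-shaped) tensor into a
    container image that should look like the cover."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, cover, secret):
        return self.net(torch.cat([cover, secret], dim=1))

class Reveal(nn.Module):
    """Recover the secret from the container image alone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, container):
        return self.net(container)

hide, reveal = Hide(), Reveal()
opt = torch.optim.Adam(list(hide.parameters()) + list(reveal.parameters()), lr=1e-3)

for step in range(100):                  # toy training loop on random data
    cover = torch.rand(8, 3, 32, 32)     # stand-in for day-to-day images
    secret = torch.rand(8, 3, 32, 32)    # stand-in for encoded records
    container = hide(cover, secret)
    recovered = reveal(container)
    # Container should stay close to the cover; recovered should match secret.
    loss = nn.functional.mse_loss(container, cover) \
         + nn.functional.mse_loss(recovered, secret)
    opt.zero_grad()
    loss.backward()
    opt.step()
```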
- A vision for global privacy bridges: Technical and legal measures for international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including this list) and is not responsible for any consequences of its use.