PriveShield: Enhancing User Privacy Using Automatic Isolated Profiles in Browsers
- URL: http://arxiv.org/abs/2501.02091v1
- Date: Fri, 03 Jan 2025 20:29:33 GMT
- Title: PriveShield: Enhancing User Privacy Using Automatic Isolated Profiles in Browsers
- Authors: Seyed Ali Akhavani, Engin Kirda, Amin Kharraz
- Abstract summary: PriveShield is a lightweight privacy mechanism that disrupts the information-gathering cycle.
Our evaluation shows that the extension is effective in preventing retargeted ads in 91% of 54 real-world scenarios.
- Abstract: Online tracking is a widespread practice on the web that raises ethical, security, and privacy concerns. While web tracking can offer personalized and curated content to Internet users, it also operates as a sophisticated surveillance mechanism that gathers extensive user information. This paper introduces PriveShield, a lightweight privacy mechanism that disrupts the information-gathering cycle while giving Internet users more control over their privacy. PriveShield is implemented as a browser extension that offers an adjustable privacy feature for surfing the web with multiple identities or accounts simultaneously, without any changes to the underlying browser code or services. When necessary, multiple factors are analyzed automatically on the client side to isolate cookies and other information that form the basis of online tracking. PriveShield creates isolated profiles for clients based on their browsing history, their interactions with websites, and the amount of time they spend on specific websites. This allows users to prevent unwanted browsing information from being shared with third parties and ad exchanges without manual configuration. Our evaluation results from 54 real-world scenarios show that the extension is effective in preventing retargeted ads in 91% of those scenarios.
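The extension's source is not included in this listing; the sketch below is a hedged illustration of the profile-isolation idea only, assuming Firefox's contextualIdentities (container) API as the isolation primitive and WebExtension typings for the `browser` global. The visit-count and dwell-time thresholds, and names such as `needsIsolation` and `openWithProfile`, are hypothetical stand-ins for PriveShield's own multi-factor analysis.

```typescript
// Hedged sketch of per-site cookie isolation; NOT PriveShield's actual code.
// Assumes Firefox's contextualIdentities API, where each "container" has its
// own cookie store, approximating PriveShield's isolated profiles.

interface SiteStats {
  visits: number;  // page loads observed for this origin
  dwellMs: number; // cumulative time spent on this origin
}

const profileOfOrigin = new Map<string, string>(); // origin -> cookieStoreId

// Hypothetical heuristic standing in for PriveShield's multi-factor analysis.
function needsIsolation(s: SiteStats): boolean {
  return s.visits >= 5 || s.dwellMs >= 10 * 60 * 1000;
}

async function openWithProfile(url: string, stats: SiteStats): Promise<void> {
  if (!needsIsolation(stats)) {
    await browser.tabs.create({ url }); // default shared cookie store
    return;
  }
  const origin = new URL(url).origin;
  let storeId = profileOfOrigin.get(origin);
  if (storeId === undefined) {
    // Create a dedicated container (an isolated cookie store) for the origin.
    const identity = await browser.contextualIdentities.create({
      name: `profile:${origin}`,
      color: "blue",
      icon: "fingerprint",
    });
    storeId = identity.cookieStoreId;
    profileOfOrigin.set(origin, storeId);
  }
  // Tabs in this container cannot read other profiles' cookies, so a tracker
  // embedded here only ever sees this profile's state.
  await browser.tabs.create({ url, cookieStoreId: storeId });
}
```

With this kind of split, a retargeting network that sets an identifier cookie inside one profile never receives it from tabs opened under another, which is the effect the paper's 54-scenario evaluation measures.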
Related papers
- On the Differential Privacy and Interactivity of Privacy Sandbox Reports
The Privacy Sandbox initiative from Google includes APIs for enabling privacy-preserving advertising functionalities.
We provide a formal model for analyzing the privacy of these APIs and show that they satisfy a differential privacy (DP) guarantee.
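The summary does not state the parameters; for reference, the standard (ε, δ)-DP guarantee that such an analysis establishes says that for any two neighboring report datasets D and D' (differing in one user's contribution) and any set S of outputs of the mechanism M:

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
```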
arXiv Detail & Related papers (2024-12-22T08:22:57Z)
- Web Privacy based on Contextual Integrity: Measuring the Collapse of Online Contexts
We operationalize the theory of Privacy as Contextual Integrity and measure persistent user identification within and between Web contexts.
We crawl the top-700 popular websites across the contexts of health, finance, news & media, LGBTQ, eCommerce, adult, and education websites, for 27 days.
Our findings reveal how persistent browser identification varies between and within contexts, diffusing user IDs to different distances, contrasting known tracking distributions across websites, and operating as a joint or separate effort via cookie IDs and JS fingerprinting.
arXiv Detail & Related papers (2024-12-19T23:30:29Z)
- Inference Privacy: Properties and Mechanisms
Inference Privacy (IP) allows a user to interact with a model while providing a rigorous privacy guarantee for the user's data at inference time.
We present two types of mechanisms for achieving IP: input perturbations and output perturbations, both customizable by the user; a rough sketch of the latter follows.
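The paper's concrete mechanisms are not given in this summary; as an illustration only, output perturbation is often realized by adding Laplace noise calibrated to a sensitivity bound, as in this hypothetical sketch (`sensitivity`, `epsilon`, and the function names are assumptions, not the paper's API):

```typescript
// Illustrative output perturbation: add Laplace(sensitivity / epsilon) noise
// to a numeric model output. Not the paper's mechanism, just the common recipe.

function sampleLaplace(scale: number): number {
  // Inverse-CDF sampling with u uniform in (-0.5, 0.5).
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

function perturbOutput(value: number, sensitivity: number, epsilon: number): number {
  // Smaller epsilon => larger noise => stronger privacy, lower accuracy.
  return value + sampleLaplace(sensitivity / epsilon);
}

// Example: a model score of 42.0 released under a user-chosen epsilon.
console.log(perturbOutput(42.0, 1.0, 0.5));
```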
arXiv Detail & Related papers (2024-11-27T20:47:28Z)
- Fingerprinting and Tracing Shadows: The Development and Impact of Browser Fingerprinting on Digital Privacy
Browser fingerprinting is an increasingly common technique for identifying and tracking users online without traditional methods like cookies.
This paper surveys the various fingerprinting techniques and analyzes the entropy and uniqueness of the collected data.
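The summary does not give the paper's exact metric; the measure conventionally used in fingerprinting studies is the Shannon entropy of each attribute's value distribution:

```latex
% Entropy of a fingerprint attribute whose values occur with probability p_i;
% higher H (in bits) means the attribute partitions users more finely.
H = -\sum_{i} p_i \log_2 p_i
```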
arXiv Detail & Related papers (2024-11-18T20:32:31Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration
PrivacyRestore is a plug-and-play method to protect the privacy of user inputs during inference.
We create three datasets, covering medical and legal domains, to evaluate the effectiveness of PrivacyRestore.
arXiv Detail & Related papers (2024-06-03T14:57:39Z)
- Evaluating Google's Protected Audience Protocol
Google has proposed the Privacy Sandbox initiative to enable ad targeting without third-party cookies.
This work focuses on analyzing linkage privacy risks for the reporting mechanisms proposed in the Protected Audience proposal.
arXiv Detail & Related papers (2024-05-13T18:28:56Z)
- Characterizing Browser Fingerprinting and its Mitigations
This work explores one widely used tracking technique: browser fingerprinting.
We detail how browser fingerprinting works, how prevalent it is, and what defenses can mitigate it.
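As a generic illustration of the signals involved (a hedged sketch, not this paper's methodology), a script can combine stable browser attributes and hash them into one identifier:

```typescript
// Generic fingerprinting sketch: join stable browser attributes and hash them.
// Every API used here is standard in browsers; the attribute choice is
// illustrative, not taken from the paper.
async function fingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                              // browser + OS build
    navigator.language,                               // locale
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone, // time zone
    String(navigator.hardwareConcurrency),            // CPU core count
  ].join("|");
  // SHA-256 via Web Crypto compacts the signals into a stable hex identifier.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(signals),
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Each attribute alone is coarse, but their combination is often close to unique per device, which is why mitigations typically normalize these values or add noise to them.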
arXiv Detail & Related papers (2023-10-12T20:31:24Z)
- Protecting User Privacy in Online Settings via Supervised Learning
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
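The paper's model and features are not described in this summary; the sketch below only shows where such a detector could plug into a browser, using the real WebExtension webRequest blocking API with a hypothetical stand-in scoring function:

```typescript
// Where a learned privacy detector could hook into the browser.
// trackerScore is a hypothetical stand-in; a trained supervised model
// (as in the paper) would replace this keyword heuristic.
function trackerScore(url: string): number {
  return /(track|pixel|beacon|analytics)/i.test(url) ? 0.9 : 0.1;
}

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Cancel requests the detector flags as likely data collection.
    return { cancel: trackerScore(details.url) > 0.5 };
  },
  { urls: ["<all_urls>"] },
  ["blocking"], // needs the "webRequest" and "webRequestBlocking" permissions
);
```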
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Privacy Explanations - A Means to End-User Trust
We looked into how explainability might help tackle this problem.
We created privacy explanations that aim to clarify to end users why, and for what purposes, specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- SPAct: Self-supervised Privacy Preservation for Action Recognition
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments in self-supervised learning (SSL) have unleashed the untapped potential of unlabeled data.
We present a novel training framework that removes privacy information from input video in a self-supervised manner, without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)