Amplifying Privacy: Scaling Up Transparency Research Through Delegated Access Requests
- URL: http://arxiv.org/abs/2106.06844v1
- Date: Sat, 12 Jun 2021 19:51:55 GMT
- Title: Amplifying Privacy: Scaling Up Transparency Research Through Delegated Access Requests
- Authors: Hadi Asghari, Thomas van Biemen, Martijn Warnier
- Abstract summary: We present an alternative method: asking participants to delegate their right of access to the researchers.
We discuss the legal grounds for doing this, the advantages it can bring to both researchers and data subjects, and present a procedural and technical design.
We tested our method in a pilot study in the Netherlands, and found that it creates a win-win for both the researchers and the participants.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, numerous studies have used 'data subject access requests' in
a collective manner, to tackle information asymmetries and shed light on data
collection and privacy practices of organizations. While successful at
increasing transparency, such studies are quite hard to conduct for the simple
fact that right of access is an individual right. This means that researchers
have to recruit participants and guide them through the often-cumbersome
process of access. In this paper, we present an alternative method: to ask
participants to delegate their right of access to the researchers. We discuss
the legal grounds for doing this, the advantages it can bring to both
researchers and data subjects, and present a procedural and technical design to
execute it in a manner that ensures data subjects stay informed and in charge
during the process. We tested our method in a pilot study in the Netherlands,
and found that it creates a win-win for both the researchers and the
participants. We also noted differences in how data controllers from various
sectors react to such requests and discuss some remaining challenges.
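The abstract describes a procedural and technical design for delegation without spelling out implementation details. As an illustration only, the following minimal Python sketch shows one way a delegated Article 15 request might be assembled for a single data controller; the class, its fields, and the letter text are hypothetical assumptions, not the authors' actual design.

```python
# Hypothetical illustration of a delegated access request: a participant
# mandates the researchers to contact a data controller on their behalf.
# All names, fields, and wording below are assumptions for illustration,
# not the implementation described in the paper.
from dataclasses import dataclass
from datetime import date


@dataclass
class Delegation:
    participant_name: str
    controller_name: str
    controller_email: str
    mandate_signed_on: date   # date the participant signed the written mandate
    informed_consent: bool    # participant understood the scope and may revoke


def build_access_request(d: Delegation) -> str:
    """Render a GDPR Article 15 access request sent under a delegation."""
    if not d.informed_consent:
        raise ValueError("participant has not given informed consent")
    return (
        f"To: {d.controller_email}\n"
        "Subject: Data subject access request (delegated)\n\n"
        f"Dear {d.controller_name},\n\n"
        f"On behalf of {d.participant_name}, and under the written mandate "
        f"signed on {d.mandate_signed_on.isoformat()} (attached), we request "
        "a copy of all personal data you process about this person, "
        "pursuant to Article 15 GDPR.\n"
    )


print(build_access_request(Delegation(
    participant_name="J. Doe",
    controller_name="Example Retailer B.V.",
    controller_email="privacy@example.com",
    mandate_signed_on=date(2021, 3, 1),
    informed_consent=True,
)))
```

Tracking the mandate date and a revocable consent flag alongside each request is one simple way to support the stated design goal that data subjects stay informed and in charge throughout the process.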
Related papers
- AccessShare: Co-designing Data Access and Sharing with Blind People [13.405455952573005]
Blind people are often asked to contribute image data to datasets for AI innovation.
Yet, visually inspecting the contributed images is inaccessible to them.
To address this gap, we engage 10 blind participants in a scenario where they wear smartglasses and collect image data using an AI-infused application in their homes.
arXiv Detail & Related papers (2024-07-27T23:39:58Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy (a minimal illustration of that standard appears after this list).
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Insights from an experiment crowdsourcing data from thousands of US Amazon users: The importance of transparency, money, and data use [6.794366017852433]
This paper shares an innovative approach to crowdsourcing user data to collect otherwise inaccessible Amazon purchase histories, spanning 5 years, from more than 5000 US users.
We developed a data collection tool that prioritizes participant consent and includes an experimental study design.
Experiment results (N=6325) reveal that both monetary incentives and transparency can significantly increase data sharing.
arXiv Detail & Related papers (2024-04-19T20:45:19Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey of privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Should I disclose my dataset? Caveats between reproducibility and individual data rights [5.816090284071069]
The digital availability of court documents opens new possibilities for researchers.
However, personal data protection laws impose restrictions on data exposure.
We present legal and ethical considerations on the issue, as well as guidelines for researchers.
arXiv Detail & Related papers (2022-11-01T14:42:11Z)
- Exploring and Improving the Accessibility of Data Privacy-related Information for People Who Are Blind or Low-vision [22.66113008033347]
We present a study of privacy attitudes and behaviors of people who are blind or low vision.
Our study involved in-depth interviews with 21 US participants.
One objective of the study is to better understand this user group's needs for more accessible privacy tools.
arXiv Detail & Related papers (2022-08-21T20:54:40Z)
- A Roadmap for Greater Public Use of Privacy-Sensitive Government Data: Workshop Report [11.431595898012377]
The workshop specifically focused on challenges and successes in government data sharing at various levels.
The first day focused on successful examples of new technology applied to sharing of public data, including formal privacy techniques, synthetic data, and cryptographic approaches.
arXiv Detail & Related papers (2022-06-17T17:20:29Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Yes-Yes-Yes: Donation-based Peer Reviewing Data Collection for ACL Rolling Review and Beyond [58.71736531356398]
We present an in-depth discussion of peer reviewing data, outline the ethical and legal desiderata for peer reviewing data collection, and propose the first continuous, donation-based data collection workflow.
We report on the ongoing implementation of this workflow at the ACL Rolling Review and deliver the first insights obtained with the newly collected data.
arXiv Detail & Related papers (2022-01-27T11:02:43Z)
- Scaling up Search Engine Audits: Practical Insights for Algorithm Auditing [68.8204255655161]
We set up experiments for eight search engines with hundreds of virtual agents placed in different regions.
We demonstrate the successful performance of our research infrastructure across multiple data collections.
We conclude that virtual agents are a promising avenue for monitoring the performance of algorithms over long periods of time.
arXiv Detail & Related papers (2021-06-10T15:49:58Z)
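As a brief aside to the mobility-data entry above, which measures deployed anonymization practices against differential privacy, the following is a minimal sketch of that standard for a single counting query (the Laplace mechanism). The data, query, and epsilon are illustrative assumptions, not taken from that paper.

```python
# Minimal sketch of the Laplace mechanism, the textbook way to release a
# counting query with epsilon-differential privacy. The data and epsilon
# below are illustrative, not from the surveyed paper.
import numpy as np


def dp_count(values, predicate, epsilon: float) -> float:
    """Noisy count of items matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1 / epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)


trip_lengths_km = [3, 12, 7, 25, 1, 40, 9]  # hypothetical mobility records
print(dp_count(trip_lengths_km, lambda km: km > 10, epsilon=0.5))
```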
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.