Insights from an experiment crowdsourcing data from thousands of US Amazon users: The importance of transparency, money, and data use
- URL: http://arxiv.org/abs/2404.13172v2
- Date: Tue, 14 May 2024 16:15:39 GMT
- Authors: Alex Berke, Robert Mahari, Sandy Pentland, Kent Larson, D. Calacci
- Abstract summary: This paper shares an innovative approach to crowdsourcing user data to collect otherwise inaccessible Amazon purchase histories, spanning 5 years, from more than 5000 US users.
We developed a data collection tool that prioritizes participant consent and includes an experimental study design.
Experiment results (N=6325) reveal that both monetary incentives and transparency can significantly increase data sharing.
- Score: 6.794366017852433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data generated by users on digital platforms are a crucial resource for advocates and researchers interested in uncovering digital inequities, auditing algorithms, and understanding human behavior. Yet data access is often restricted. How can researchers both effectively and ethically collect user data? This paper shares an innovative approach to crowdsourcing user data to collect otherwise inaccessible Amazon purchase histories, spanning 5 years, from more than 5000 US users. We developed a data collection tool that prioritizes participant consent and includes an experimental study design. The design allows us to study multiple aspects of privacy perception and data sharing behavior. Experiment results (N=6325) reveal that both monetary incentives and transparency can significantly increase data sharing. Age, race, education, and gender also played a role, with female and less-educated participants more likely to share. Our study design enables a unique empirical evaluation of the "privacy paradox", where users claim to value their privacy more than they do in practice. We set up both real and hypothetical data sharing scenarios and find measurable similarities and differences in share rates across these contexts. For example, increasing monetary incentives had a 6 times higher impact on share rates in real scenarios. In addition, we study participants' opinions on how data should be used by various third parties, again finding that demographics have a significant impact. Notably, the majority of participants disapproved of government agencies using purchase data, yet the majority approved of use by researchers. Overall, our findings highlight the critical role that transparency, incentive design, and user demographics play in ethical data collection practices, and provide guidance for future researchers seeking to crowdsource user-generated data.
Related papers
- A Public and Reproducible Assessment of the Topics API on Real Data [1.1510009152620668]
The Topics API for the web is Google's privacy-enhancing alternative to replace third-party cookies.
We perform a reproducible assessment of the latest version of the Topics API on the largest and publicly available dataset of real browsing histories.
We find that 46%, 55%, and 60% of the 1207 users in the dataset are uniquely re-identified across websites after only 1, 2, and 3 observations of their topics by advertisers.
arXiv Detail & Related papers (2024-03-28T17:03:44Z)
- From Data Creator to Data Reuser: Distance Matters [1.000779758350696]
Investment in data management could be made more wisely by considering who might reuse data, how, why, and when.
Data creators cannot anticipate all possible reuses or reusers.
We develop the theoretical construct of distance between data creator and data reuser.
arXiv Detail & Related papers (2024-02-05T18:16:04Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics that we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Contributing to Accessibility Datasets: Reflections on Sharing Study Data by Blind People [14.625384963263327]
We present a pair of studies where 13 blind participants engage in data capturing activities.
We see how different factors influence blind participants' willingness to share study data as they assess risk-benefit tradeoffs.
The majority support sharing of their data to improve technology but also express concerns over commercial use, associated metadata, and the lack of transparency about the impact of their data.
arXiv Detail & Related papers (2023-03-09T00:42:18Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data stories.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2021-01-19T16:13:44Z)
- Security and Privacy Preserving Deep Learning [2.322461721824713]
Massive data collection required for deep learning presents obvious privacy issues.
Users' personal, highly sensitive data, such as photos and voice recordings, are kept indefinitely by the companies that collect them.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
arXiv Detail & Related papers (2020-06-23T01:53:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.