Contributing to Accessibility Datasets: Reflections on Sharing Study
Data by Blind People
- URL: http://arxiv.org/abs/2303.04962v1
- Date: Thu, 9 Mar 2023 00:42:18 GMT
- Authors: Rie Kamikubo, Kyungjun Lee, Hernisa Kacorri
- Abstract summary: We present a pair of studies where 13 blind participants engage in data capturing activities.
We see how different factors influence blind participants' willingness to share study data as they assess risk-benefit tradeoffs.
The majority support sharing of their data to improve technology but also express concerns over commercial use, associated metadata, and the lack of transparency about the impact of their data.
- Score: 14.625384963263327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To ensure that AI-infused systems work for disabled people, we need
to bring accessibility datasets sourced from this community into the development
lifecycle. However, many ethical and privacy concerns limit greater data
inclusion, making such datasets not readily available. We present
a pair of studies where 13 blind participants engage in data capturing
activities and reflect with and without probing on various factors that
influence their decision to share their data via an AI dataset. We see how
different factors influence blind participants' willingness to share study data
as they assess risk-benefit tradeoffs. The majority support sharing of their
data to improve technology but also express concerns over commercial use,
associated metadata, and the lack of transparency about the impact of their
data. These insights have implications for the development of responsible
practices for stewarding accessibility datasets, and can contribute to broader
discussions in this area.
Related papers
- AccessShare: Co-designing Data Access and Sharing with Blind People [13.405455952573005]
Blind people are often called to contribute image data to datasets for AI innovation.
Yet, the visual inspection of the contributed images is inaccessible.
To address this gap, we engage 10 blind participants in a scenario where they wear smartglasses and collect image data using an AI-infused application in their homes.
arXiv Detail & Related papers (2024-07-27T23:39:58Z)
- Lazy Data Practices Harm Fairness Research [49.02318458244464]
We present a comprehensive analysis of fair ML datasets, demonstrating how unreflective practices hinder the reach and reliability of algorithmic fairness findings.
Our analyses identify three main areas of concern: (1) a lack of representation for certain protected attributes in both data and evaluations; (2) the widespread exclusion of minorities during data preprocessing; and (3) opaque data processing threatening the generalization of fairness research.
This study underscores the need for a critical reevaluation of data practices in fair ML and offers directions to improve both the sourcing and usage of datasets.
arXiv Detail & Related papers (2024-04-26T09:51:24Z)
- Insights from an experiment crowdsourcing data from thousands of US Amazon users: The importance of transparency, money, and data use [6.794366017852433]
This paper shares an innovative approach to crowdsourcing user data to collect otherwise inaccessible Amazon purchase histories, spanning 5 years, from more than 5000 US users.
We developed a data collection tool that prioritizes participant consent and includes an experimental study design.
Experiment results (N=6325) reveal both monetary incentives and transparency can significantly increase data sharing.
arXiv Detail & Related papers (2024-04-19T20:45:19Z)
- Data Acquisition: A New Frontier in Data-centric AI [65.90972015426274]
We first present an investigation of current data marketplaces, revealing a lack of platforms offering detailed information about datasets.
We then introduce the DAM challenge, a benchmark to model the interaction between the data providers and acquirers.
Our evaluation of the submitted strategies underlines the need for effective data acquisition strategies in Machine Learning.
arXiv Detail & Related papers (2023-11-22T22:15:17Z)
- Towards Generalizable Data Protection With Transferable Unlearnable Examples [50.628011208660645]
We present a novel, generalizable data protection method by generating transferable unlearnable examples.
To the best of our knowledge, this is the first solution that examines data privacy from the perspective of data distribution.
arXiv Detail & Related papers (2023-05-18T04:17:01Z)
- Exploring and Improving the Accessibility of Data Privacy-related Information for People Who Are Blind or Low-vision [22.66113008033347]
We present a study of privacy attitudes and behaviors of people who are blind or low vision.
Our study involved in-depth interviews with 21 US participants.
One objective of the study is to better understand this user group's needs for more accessible privacy tools.
arXiv Detail & Related papers (2022-08-21T20:54:40Z)
- Data Representativeness in Accessibility Datasets: A Meta-Analysis [7.6597163467929805]
We review datasets sourced by people with disabilities and older adults.
We find that accessibility datasets represent diverse ages, but have gender and race representation gaps.
We hope our effort expands the space of possibility for greater inclusion of marginalized communities in AI-infused systems.
arXiv Detail & Related papers (2022-07-16T23:32:19Z)
- Lessons from the AdKDD'21 Privacy-Preserving ML Challenge [57.365745458033075]
A prominent proposal at W3C only allows sharing advertising signals through aggregated, differentially private reports of past displays.
To study this proposal extensively, an open Privacy-Preserving Machine Learning Challenge took place at AdKDD'21.
A key finding is that learning models on large, aggregated data in the presence of a small set of unaggregated data points can be surprisingly efficient and cheap.
arXiv Detail & Related papers (2022-01-31T11:09:59Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data storytellings.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2021-01-19T16:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.