SoK: Safer Digital-Safety Research Involving At-Risk Users
- URL: http://arxiv.org/abs/2309.00735v1
- Date: Fri, 1 Sep 2023 21:15:39 GMT
- Title: SoK: Safer Digital-Safety Research Involving At-Risk Users
- Authors: Rosanna Bellini, Emily Tseng, Noel Warford, Alaa Daffalla, Tara
Matthews, Sunny Consolvo, Jill Palzkill Woelfer, Patrick Gage Kelley,
Michelle L. Mazurek, Dana Cuomo, Nicola Dell, and Thomas Ristenpart
- Abstract summary: Pursuing research in computer security and privacy is crucial to understanding how to meet the digital-safety needs of at-risk users.
We offer an analysis of 196 academic works to elicit 14 research risks and 36 safety practices used by a growing community of researchers.
We conclude by suggesting areas for future research regarding the reporting, study, and funding of at-risk user research.
- Score: 43.45078079505055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research involving at-risk users -- that is, users who are more likely to
experience a digital attack or to be disproportionately affected when harm from
such an attack occurs -- can pose significant safety challenges to both users
and researchers. Nevertheless, pursuing research in computer security and
privacy is crucial to understanding how to meet the digital-safety needs of
at-risk users and to design safer technology for all. To standardize and
bolster safer research involving such users, we offer an analysis of 196
academic works to elicit 14 research risks and 36 safety practices used by a
growing community of researchers. We pair this inconsistent set of reported
safety practices with oral histories from 12 domain experts to contribute
scaffolded and consolidated pragmatic guidance that researchers can use to
plan, execute, and share safer digital-safety research involving at-risk users.
We conclude by suggesting areas for future research regarding the reporting,
study, and funding of at-risk user research.
Related papers
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Insider Threats Mitigation: Role of Penetration Testing [0.0]
This study aims to improve understanding of penetration testing as a critical part of insider threat defense.
We look at how penetration testing is used in different industries, present case studies with real-world implementations, and discuss the obstacles and constraints that businesses must overcome.
arXiv Detail & Related papers (2024-07-24T15:14:48Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Beyond the Safeguards: Exploring the Security Risks of ChatGPT [3.1981440103815717]
The increasing popularity of large language models (LLMs) has led to growing concerns about their safety, security risks, and ethical implications.
This paper aims to provide an overview of the different types of security risks associated with ChatGPT, including malicious text and code generation, private data disclosure, fraudulent services, information gathering, and producing unethical content.
arXiv Detail & Related papers (2023-05-13T21:01:14Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, improving safety classification accuracy by 5.9% in absolute terms.
arXiv Detail & Related papers (2022-12-19T17:51:47Z)
- Getting Users Smart Quick about Security: Results from 90 Minutes of Using a Persuasive Toolkit for Facilitating Information Security Problem Solving by Non-Professionals [2.4923006485141284]
A balanced level of user engagement in security is difficult to achieve due to differing priorities between the business and security perspectives.
We have developed a persuasive software toolkit to engage users in structured discussions about security vulnerabilities in their company.
In the research reported here we examine how non-professionals perceived security problems through a short-term use of the toolkit.
arXiv Detail & Related papers (2022-09-06T11:37:21Z)
- SoK: A Framework for Unifying At-Risk User Research [18.216554583064063]
At-risk users are people who experience elevated digital security, privacy, and safety threats because of what they do.
We present a framework for reasoning about at-risk users based on a wide-ranging meta-analysis of 85 papers.
arXiv Detail & Related papers (2021-12-13T22:27:24Z)
- Human Factors in Security Research: Lessons Learned from 2008-2018 [8.255966566768484]
We focus our analysis on the crucial population of experts, whose human errors can impact many systems at once.
We analyzed the past decade of human factors research in security and privacy, identifying 557 relevant publications.
arXiv Detail & Related papers (2021-03-24T15:58:05Z)
- Epidemic mitigation by statistical inference from contact tracing data [61.04165571425021]
We develop Bayesian inference methods to estimate the risk that an individual is infected.
We propose to use probabilistic risk estimation in order to optimize testing and quarantining strategies for the control of an epidemic.
Our approaches translate into fully distributed algorithms that only require communication between individuals who have recently been in contact.
arXiv Detail & Related papers (2020-09-20T12:24:45Z)