SoK: A Framework for Unifying At-Risk User Research
- URL: http://arxiv.org/abs/2112.07047v1
- Date: Mon, 13 Dec 2021 22:27:24 GMT
- Title: SoK: A Framework for Unifying At-Risk User Research
- Authors: Noel Warford and Tara Matthews and Kaitlyn Yang and Omer Akgul and
Sunny Consolvo and Patrick Gage Kelley and Nathan Malkin and Michelle L.
Mazurek and Manya Sleeper and Kurt Thomas
- Abstract summary: At-risk users are people who experience elevated digital security, privacy, and safety threats because of what they do, who they are, where they are, or who they are with.
We present a framework for reasoning about at-risk users based on a wide-ranging meta-analysis of 85 papers.
- Score: 18.216554583064063
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: At-risk users are people who experience elevated digital security, privacy,
and safety threats because of what they do, who they are, where they are, or
who they are with. In this systematization work, we present a framework for
reasoning about at-risk users based on a wide-ranging meta-analysis of 85
papers. Across the varied populations that we examined (e.g., children,
activists, women in developing regions), we identified 10 unifying contextual
risk factors--such as oppression or stigmatization and access to a sensitive
resource--which augment or amplify digital-safety threats and their resulting
harms. We also identified technical and non-technical practices that at-risk
users adopt to attempt to protect themselves from digital-safety threats. We
use this framework to discuss barriers that limit at-risk users' ability or
willingness to take protective actions. We believe that the security, privacy,
and human-computer interaction research and practitioner communities can use
our framework to identify and shape research investments to benefit at-risk
users, and to guide technology design to better support at-risk users.
Related papers
- A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis [0.6199770411242359]
This paper presents a novel human-centered risk evaluation framework using conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on an attacker's motivation.
Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases.
arXiv Detail & Related papers (2024-09-17T14:18:21Z)
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- A Security Risk Taxonomy for Prompt-Based Interaction With Large Language Models [5.077431021127288]
This paper addresses a gap in current research by focusing on security risks posed by large language models (LLMs).
Our work proposes a taxonomy of security risks along the user-model communication pipeline and categorizes the attacks by target and attack type alongside the commonly used confidentiality, integrity, and availability (CIA) triad.
arXiv Detail & Related papers (2023-11-19T20:22:05Z)
- SoK: Safer Digital-Safety Research Involving At-Risk Users [43.45078079505055]
Pursuing research in computer security and privacy is crucial to understanding how to meet the digital-safety needs of at-risk users.
We offer an analysis of 196 academic works to elicit 14 research risks and 36 safety practices used by a growing community of researchers.
We conclude by suggesting areas for future research regarding the reporting, study, and funding of at-risk user research.
arXiv Detail & Related papers (2023-09-01T21:15:39Z)
- "My sex-related data is more sensitive than my financial data and I want the same level of security and privacy": User Risk Perceptions and Protective Actions in Female-oriented Technologies [6.5268245109828005]
Digitalization of the reproductive body has engaged myriad cutting-edge technologies that support people in understanding and managing their intimate health.
FemTech products and systems collect a wide range of intimate data which are processed, saved and shared with other parties.
We explore how the "data-hungry" nature of this industry and the lack of proper safeguarding mechanisms can lead to complex harms or faint agentic potential.
arXiv Detail & Related papers (2023-06-09T15:16:30Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, with an absolute improvement of 5.9% in safety classification accuracy.
arXiv Detail & Related papers (2022-12-19T17:51:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.