"My sex-related data is more sensitive than my financial data and I want
the same level of security and privacy": User Risk Perceptions and Protective
Actions in Female-oriented Technologies
- URL: http://arxiv.org/abs/2306.05956v2
- Date: Wed, 4 Oct 2023 14:36:26 GMT
- Title: "My sex-related data is more sensitive than my financial data and I want
the same level of security and privacy": User Risk Perceptions and Protective
Actions in Female-oriented Technologies
- Authors: Maryam Mehrnezhad and Teresa Almeida
- Abstract summary: Digitalization of the reproductive body has engaged myriads of cutting-edge technologies in supporting people to know and tackle their intimate health.
FemTech products and systems collect a wide range of intimate data which are processed, saved and shared with other parties.
We explore how the "data-hungry" nature of this industry and the lack of proper safeguarding mechanisms can lead to complex harms or faint agentic potential.
- Score: 6.5268245109828005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The digitalization of the reproductive body has engaged myriads of
cutting-edge technologies in supporting people to know and tackle their
intimate health. Generally understood as female technologies (aka
female-oriented technologies or 'FemTech'), these products and systems collect
a wide range of intimate data which are processed, transferred, saved and
shared with other parties. In this paper, we explore how the "data-hungry"
nature of this industry and the lack of proper safeguarding mechanisms,
standards, and regulations for vulnerable data can lead to complex harms or
faint agentic potential. We adopted mixed methods in exploring users'
understanding of the security and privacy (SP) of these technologies. Our
findings show that while users can speculate about the range of harms and risks
associated with these technologies, they are not equipped with the technological
skills needed to protect themselves against such risks. We discuss a
number of approaches, including participatory threat modelling and SP by
design, in the context of this work and conclude that such approaches are
critical to protect users in these sensitive systems.
Related papers
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Physical and Software Based Fault Injection Attacks Against TEEs in Mobile Devices: A Systemisation of Knowledge [5.6064476854380825]
Trusted Execution Environments (TEEs) are critical components of modern secure computing.
They provide isolated zones in processors to safeguard sensitive data and execute secure operations.
Despite their importance, TEEs are increasingly vulnerable to fault injection (FI) attacks.
arXiv Detail & Related papers (2024-11-22T11:59:44Z)
- Security in IS and social engineering -- an overview and state of the art [0.6345523830122166]
The digitization of all processes and the opening to IoT devices has fostered the emergence of a new form of crime, i.e. cybercrime.
The maliciousness of such attacks lies in the fact that they turn users into facilitators of cyber-attacks, to the point of being perceived as the "weak link" of cybersecurity.
Knowing how to anticipate attacks, identify weak signals and outliers, detect computer crime early and react to it quickly are therefore priority issues requiring an approach based on prevention and cooperation.
arXiv Detail & Related papers (2024-06-17T13:25:27Z)
- The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z)
- Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, with an absolute improvement of 5.9% in safety classification accuracy.
arXiv Detail & Related papers (2022-12-19T17:51:47Z)
- Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy [1.6114012813668934]
This work examines algorithmic bias in consumer mobile health technologies (mHealth).
We explore to what extent current mechanisms - legal, technical, and normative - help mitigate potential risks associated with unwanted bias.
We provide additional guidance on the role and responsibilities technologists and policymakers have to ensure that such systems empower patients equitably.
arXiv Detail & Related papers (2022-08-29T00:15:45Z)
- SoK: A Framework for Unifying At-Risk User Research [18.216554583064063]
At-risk users are people who experience elevated digital security, privacy, and safety threats because of what they do.
We present a framework for reasoning about at-risk users based on a wide-ranging meta-analysis of 85 papers.
arXiv Detail & Related papers (2021-12-13T22:27:24Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the Covid-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of 'COVI', a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.