"My sex-related data is more sensitive than my financial data and I want
the same level of security and privacy": User Risk Perceptions and Protective
Actions in Female-oriented Technologies
- URL: http://arxiv.org/abs/2306.05956v2
- Date: Wed, 4 Oct 2023 14:36:26 GMT
- Title: "My sex-related data is more sensitive than my financial data and I want
the same level of security and privacy": User Risk Perceptions and Protective
Actions in Female-oriented Technologies
- Authors: Maryam Mehrnezhad and Teresa Almeida
- Abstract summary: The digitalization of the reproductive body has engaged a myriad of cutting-edge technologies that support people in understanding and managing their intimate health.
FemTech products and systems collect a wide range of intimate data which are processed, saved and shared with other parties.
We explore how the "data-hungry" nature of this industry and the lack of proper safeguarding mechanisms can lead to complex harms or faint agentic potential.
- Score: 6.5268245109828005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The digitalization of the reproductive body has engaged a myriad of
cutting-edge technologies that support people in understanding and managing their
intimate health. Generally understood as female technologies (aka
female-oriented technologies or 'FemTech'), these products and systems collect
a wide range of intimate data which are processed, transferred, saved and
shared with other parties. In this paper, we explore how the "data-hungry"
nature of this industry and the lack of proper safeguarding mechanisms,
standards, and regulations for vulnerable data can lead to complex harms or
faint agentic potential. We adopted mixed methods in exploring users'
understanding of the security and privacy (SP) of these technologies. Our
findings show that while users can speculate about the range of harms and risks
associated with these technologies, they are not equipped with the technological
skills to protect themselves against such risks. We discuss a
number of approaches, including participatory threat modelling and SP by
design, in the context of this work and conclude that such approaches are
critical to protect users in these sensitive systems.
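As a concrete illustration of the participatory threat modelling approach mentioned above, threats elicited together with users can be recorded as simple structured entries. The sketch below is hypothetical: the fields and example values are our own assumptions for a FemTech-style app, not the authors' instrument.

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    """One row of a participatory threat model, filled in together with users."""
    asset: str        # intimate data item the user wants protected
    adversary: str    # who the user fears could misuse it
    harm: str         # consequence the user anticipates
    mitigation: str   # protective action or design requirement

# Illustrative entries of the kind a participatory session might surface.
entries = [
    ThreatEntry(
        asset="menstrual cycle history",
        adversary="third-party advertisers",
        harm="targeted ads revealing pregnancy status",
        mitigation="local-only storage; sharing only with explicit opt-in consent",
    ),
    ThreatEntry(
        asset="fertility predictions",
        adversary="abusive partner with device access",
        harm="monitoring and coercive control",
        mitigation="app-level PIN and discreet notifications",
    ),
]

for e in entries:
    print(f"{e.asset}: {e.adversary} -> {e.harm} | mitigation: {e.mitigation}")
```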
Related papers
- Security in IS and social engineering -- an overview and state of the art [0.6345523830122166]
The digitization of all processes and the opening of systems to IoT devices have fostered the emergence of a new form of crime, i.e. cybercrime.
The maliciousness of such attacks lies in the fact that they turn users into facilitators of cyber-attacks, to the point of being perceived as the "weak link" of cybersecurity.
Knowing how to anticipate attacks, identify weak signals and outliers, and detect and react quickly to computer crime are therefore priority issues requiring a prevention and cooperation approach.
arXiv Detail & Related papers (2024-06-17T13:25:27Z) - You Still See Me: How Data Protection Supports the Architecture of AI Surveillance [5.989015605760986]
We show how privacy-preserving techniques in the development of AI systems can support surveillance infrastructure under the guise of regulatory permissibility.
We propose technology and policy strategies to evaluate privacy-preserving techniques in light of the protections they actually confer.
arXiv Detail & Related papers (2024-02-09T18:39:29Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions, from the viewpoint of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - Foveate, Attribute, and Rationalize: Towards Physically Safe and
Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, improving absolute safety classification accuracy by 5.9%.
arXiv Detail & Related papers (2022-12-19T17:51:47Z) - Bias Impact Analysis of AI in Consumer Mobile Health Technologies:
Legal, Technical, and Policy [1.6114012813668934]
This work examines algorithmic bias in consumer mobile health technologies (mHealth) at the intersection of legal, technical, and policy considerations.
We explore to what extent current mechanisms (legal, technical, and/or normative) help mitigate potential risks associated with unwanted bias.
We provide additional guidance on the roles and responsibilities that technologists and policymakers have in ensuring that such systems empower patients equitably.
arXiv Detail & Related papers (2022-08-29T00:15:45Z) - SoK: A Framework for Unifying At-Risk User Research [18.216554583064063]
At-risk users are people who experience elevated digital security, privacy, and safety threats because of what they do.
We present a framework for reasoning about at-risk users based on a wide-ranging meta-analysis of 85 papers.
arXiv Detail & Related papers (2021-12-13T22:27:24Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
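One frequently cited pitfall in this line of work is data snooping through random train/test splits of time-ordered security data: samples from future malware families leak into training and inflate the reported accuracy. The minimal sketch below (synthetic data and a toy 1-nearest-neighbour classifier, our own illustration rather than the paper's experiments) contrasts a random split with a temporal one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "malware family" data: each family has its own feature centroid,
# and families appear one after another, so row index roughly equals time.
n_families, per_family, dim = 50, 20, 8
centroids = rng.normal(0, 3, size=(n_families, dim))
family_labels = rng.integers(0, 2, n_families)        # benign (0) / malicious (1)

X = np.vstack([c + rng.normal(0, 1, (per_family, dim)) for c in centroids])
y = np.repeat(family_labels, per_family)

def nn_accuracy(train_idx, test_idx):
    """1-nearest-neighbour classifier: copy the label of the closest training sample."""
    dists = np.linalg.norm(X[test_idx, None, :] - X[None, train_idx, :], axis=2)
    preds = y[train_idx][dists.argmin(axis=1)]
    return (preds == y[test_idx]).mean()

n, cut = len(y), int(0.8 * len(y))

# Random split: members of the same family land in both train and test,
# so the classifier is effectively tested on families it has memorised.
idx = rng.permutation(n)
print("random split accuracy:  ", nn_accuracy(idx[:cut], idx[cut:]))

# Temporal split: test only on families that appear after the training period.
print("temporal split accuracy:", nn_accuracy(np.arange(cut), np.arange(cut, n)))
```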
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Target Privacy Threat Modeling for COVID-19 Exposure Notification
Systems [8.080564346335542]
Digital contact tracing (DCT) technology has helped to slow the spread of infectious disease.
To support both ethical technology deployment and user adoption, privacy must be at the forefront.
With the loss of privacy being a critical threat, thorough threat modeling will help us to strategize and protect privacy as DCT technologies advance.
arXiv Detail & Related papers (2020-09-25T02:09:51Z) - Epidemic mitigation by statistical inference from contact tracing data [61.04165571425021]
We develop Bayesian inference methods to estimate the risk that an individual is infected.
We propose to use probabilistic risk estimation in order to optimize testing and quarantining strategies for the control of an epidemic.
Our approaches translate into fully distributed algorithms that only require communication between individuals who have recently been in contact.
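As a rough, illustrative sketch of how risk estimation can stay local to the contact graph, the code below repeatedly updates each individual's infection risk using only the risks of their recent contacts. The contact graph, transmission probability, and update rule are our own assumptions; the paper's Bayesian inference methods are considerably more refined.

```python
from collections import defaultdict

# Recent contacts per individual; each person only ever reads the risk
# values of their own contacts, mirroring a fully distributed scheme.
contacts = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice", "dave"],
    "dave": ["carol"],
}

prior = defaultdict(lambda: 0.01)  # baseline infection probability
prior["dave"] = 0.9                # e.g. dave reported a positive test

TRANSMISSION = 0.2                 # assumed per-contact transmission probability

def update_risks(risk, rounds=3):
    """Iteratively combine each person's prior with their contacts' current risks."""
    for _ in range(rounds):
        new_risk = {}
        for person, neighbours in contacts.items():
            # Start from the chance of not being infected a priori,
            # then multiply by the chance that no contact transmitted.
            p_safe = 1.0 - prior[person]
            for n in neighbours:
                p_safe *= 1.0 - TRANSMISSION * risk[n]
            new_risk[person] = 1.0 - p_safe
        risk = new_risk
    return risk

risk = update_risks({p: prior[p] for p in contacts})
for person, r in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{person}: estimated infection risk {r:.2f}")
```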
arXiv Detail & Related papers (2020-09-20T12:24:45Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z) - COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the Covid-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of 'COVI', a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)