Panel: Humans and Technology for Inclusive Privacy and Security
- URL: http://arxiv.org/abs/2101.07377v1
- Date: Mon, 18 Jan 2021 23:35:42 GMT
- Title: Panel: Humans and Technology for Inclusive Privacy and Security
- Authors: Sanchari Das and Robert S. Gutzwiller and Rod D. Roscoe and Prashanth
Rajivan and Yang Wang and L. Jean Camp and Roberto Hoyle
- Abstract summary: Separate issues arise between generic cybersecurity guidance (i.e., protect all user data from malicious threats) and the individualistic approach of privacy (i.e., specific to users and dependent on their needs and risk perceptions).
The panel will focus on potentially vulnerable populations, such as older adults, teens, persons with disabilities, and others who are not typically emphasized in general security and privacy concerns.
- Score: 7.5852661910985795
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Computer security and user privacy are critical issues and concerns in the
digital era due to both increasing users and threats to their data. Separate
issues arise between generic cybersecurity guidance (i.e., protect all user
data from malicious threats) and the individualistic approach of privacy (i.e.,
specific to users and dependent on user needs and risk perceptions). Research
has shown that several security- and privacy-focused vulnerabilities are
technological (e.g., software bugs (Streiff, Kenny, Das, Leeth, & Camp, 2018)
and insecure authentication (Das, Wang, Tingle, & Camp, 2019)) or behavioral
(e.g., sharing passwords (Das, Dingman, & Camp, 2018) and compliance (Das,
Dev, & Srinivasan, 2018; Dev, Das, Rashidi, & Camp, 2019)). This panel
proposal addresses a third category of sociotechnical vulnerabilities that can
and sometimes do arise from non-inclusive design of security and privacy. In
this panel, we will address users' needs and desires for privacy. The panel
will engage in in-depth discussions about value-sensitive design while focusing
on potentially vulnerable populations, such as older adults, teens, persons
with disabilities, and others who are not typically emphasized in general
security and privacy concerns. The human factors community has a stake in, and
the ability to facilitate, improvements in these areas.
Related papers
- Understanding Users' Security and Privacy Concerns and Attitudes Towards Conversational AI Platforms [3.789219860006095]
We conduct a large-scale analysis of over 2.5M user posts from the r/ChatGPT Reddit community to understand users' security and privacy concerns.
We find that users are concerned about each stage of the data lifecycle (i.e., collection, usage, and retention).
We provide recommendations for users, platforms, enterprises, and policymakers to enhance transparency, improve data controls, and increase user trust and adoption.
arXiv Detail & Related papers (2025-04-09T03:22:48Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Unraveling Privacy Threat Modeling Complexity: Conceptual Privacy Analysis Layers [0.7918886297003017]
Analyzing privacy threats in software products is an essential part of software development to ensure systems are privacy-respecting.
We propose to use four conceptual layers (feature, ecosystem, business context, and environment) to capture this privacy complexity.
These layers can be used as a frame to structure and specify the privacy analysis support in a more tangible and actionable way.
arXiv Detail & Related papers (2024-08-07T06:30:20Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
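For context (a textbook definition, not taken from this paper), (ε, δ)-differential privacy formalizes "almost indistinguishable": a randomized mechanism $\mathcal{M}$ satisfies (ε, δ)-DP if, for any two datasets $D$ and $D'$ differing in one privacy unit (here, all records contributed by a single user) and any set of outputs $S$,
$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta.$$
Smaller ε and δ mean the two output distributions are harder to tell apart, i.e., stronger protection for that user.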
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Security and Privacy Product Inclusion [2.0005856037535823]
We propose a threat modeling approach to identify potential risks and countermeasures for product inclusion in security and privacy.
We discuss various factors that can affect a user's ability to achieve a high level of security and privacy, including low-income demographics, poor connectivity, shared device usage, ML fairness, etc.
arXiv Detail & Related papers (2024-04-20T00:36:54Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Evaluating the Security and Privacy Risk Postures of Virtual Assistants [3.1943453294492543]
We evaluated the security and privacy postures of eight widely used voice assistants: Alexa, Braina, Cortana, Google Assistant, Kalliope, Mycroft, Hound, and Extreme.
Results revealed that these VAs are vulnerable to a range of security threats.
These vulnerabilities could allow malicious actors to gain unauthorized access to users' personal information.
arXiv Detail & Related papers (2023-12-22T12:10:52Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related users' expectations, the privacy violations, and the changing factors.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
- Usable, Acceptable, Appropriable: Towards Practicable Privacy [2.0305676256390934]
This paper explores the privacy needs of marginalized and vulnerable populations.
We introduce computers and the Internet to a group of sex-trafficking survivors in Nepal.
We highlight a few socio-political factors that have influenced the design space around digital privacy.
arXiv Detail & Related papers (2020-04-15T21:39:33Z)