Embedding Privacy in Computational Social Science and Artificial Intelligence Research
- URL: http://arxiv.org/abs/2404.11515v2
- Date: Mon, 3 Jun 2024 14:32:04 GMT
- Title: Embedding Privacy in Computational Social Science and Artificial Intelligence Research
- Authors: Keenan Jones, Fatima Zahrah, Jason R. C. Nurse
- Abstract summary: Preserving privacy has emerged as a critical factor in research.
The increasing use of advanced computational models stands to exacerbate privacy concerns.
This article contributes to the field by discussing the role of privacy and the issues that researchers working in CSS, AI, data science and related domains are likely to face.
- Score: 2.048226951354646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy is a human right. It ensures that individuals are free to engage in discussions, participate in groups, and form relationships online or offline without fear of their data being inappropriately harvested, analyzed, or otherwise used to harm them. Preserving privacy has emerged as a critical factor in research, particularly in the computational social science (CSS), artificial intelligence (AI) and data science domains, given their reliance on individuals' data for novel insights. The increasing use of advanced computational models stands to exacerbate privacy concerns because, if inappropriately used, they can quickly infringe privacy rights and lead to adverse effects for individuals -- especially vulnerable groups -- and society. We have already witnessed a host of privacy issues emerge with the advent of large language models (LLMs), such as ChatGPT, which further demonstrate the importance of embedding privacy from the start. This article contributes to the field by discussing the role of privacy and the issues that researchers working in CSS, AI, data science and related domains are likely to face. It then presents several key considerations for researchers to ensure participant privacy is best preserved in their research design, data collection and use, analysis, and dissemination of research results.
Related papers
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI [1.325665193924634]
This paper critically examines the intricacies of privacy and ethics issues within AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices might harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z) - Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory [43.12744258781724]
We formulate the privacy issue as a reasoning problem rather than simple pattern matching.
We develop the first comprehensive checklist that covers social identities, private attributes, and existing privacy regulations.
arXiv Detail & Related papers (2024-08-19T14:48:04Z) - Privacy-Preserving Collaborative Genomic Research: A Real-Life Deployment and Vision [2.7968600664591983]
This paper presents a privacy-preserving framework for genomic research, developed in collaboration with Lynx.MD.
The framework addresses critical cybersecurity and privacy challenges, enabling the privacy-preserving sharing and analysis of genomic data.
Implementing the framework within Lynx.MD involves encoding genomic data into binary formats and applying noise through controlled perturbation techniques.
arXiv Detail & Related papers (2024-07-12T05:43:13Z) - NAP^2: A Benchmark for Naturalness and Privacy-Preserving Text Rewriting by Learning from Human [55.20137833039499]
We suggest sanitizing sensitive text using two strategies commonly employed by humans.
We curate the first corpus, coined NAP^2, through both crowdsourcing and the use of large language models.
arXiv Detail & Related papers (2024-06-06T05:07:44Z) - A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, there is little attention on privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z) - Human-Centered Privacy Research in the Age of Large Language Models [31.379232599019915]
This SIG aims to bring together researchers with backgrounds in usable security and privacy, human-AI collaboration, NLP, or any other related domains to share their perspectives and experiences on this problem.
arXiv Detail & Related papers (2024-02-03T02:32:45Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, doing so 39% and 57% of the time.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
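To illustrate the DP data-publishing idea described above (a minimal sketch of the standard Laplace mechanism, not the paper's specific method), calibrated noise can be added to a query result before release so that any single individual's record has a bounded effect on the published output:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a noisy version of true_value satisfying epsilon-differential privacy.

    sensitivity: the maximum change in the query result caused by
    adding or removing one individual's record (1 for a counting query).
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise via the inverse-CDF method
    # from a uniform draw on (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Example: publish a sanitized count of 100 records with epsilon = 1.
sanitized = laplace_mechanism(100.0, sensitivity=1.0, epsilon=1.0)
```

Smaller epsilon values add more noise and give stronger privacy; the noisy value is unbiased, so repeated releases average out to the true count (which is why privacy budgets must be tracked across queries).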
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address the privacy risks that arise when GNNs are trained on sensitive graph data, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z) - User Perception of Privacy with Ubiquitous Devices [5.33024001730262]
This study aims to explore and discover various concerns related to perception of privacy in this era of ubiquitous technologies.
Key themes include attitudes toward privacy in public and private spaces, privacy awareness, consent seeking, dilemmas and confusion around various technologies, and the impact of attitudes and beliefs on individuals' actions to protect themselves from invasion of privacy in both public and private spaces.
arXiv Detail & Related papers (2021-07-23T05:01:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.