The Good, the Bad, and the (Un)Usable: A Rapid Literature Review on Privacy as Code
- URL: http://arxiv.org/abs/2412.16667v1
- Date: Sat, 21 Dec 2024 15:30:17 GMT
- Title: The Good, the Bad, and the (Un)Usable: A Rapid Literature Review on Privacy as Code
- Authors: Nicolás E. Díaz Ferreyra, Sirine Khelifi, Nalin Arachchilage, Riccardo Scandariato
- Abstract summary: Privacy and security are central to the design of information systems endowed with sound data protection and cyber resilience capabilities.
Developers often struggle to incorporate these properties into software projects as they either lack proper cybersecurity training or do not consider them a priority.
- Score: 4.479352653343731
- Abstract: Privacy and security are central to the design of information systems endowed with sound data protection and cyber resilience capabilities. Still, developers often struggle to incorporate these properties into software projects as they either lack proper cybersecurity training or do not consider them a priority. Prior work has tried to support privacy and security engineering activities through threat modeling methods for scrutinizing flaws in system architectures. Moreover, several techniques for the automatic identification of vulnerabilities and the generation of secure code implementations have also been proposed in the current literature. Conversely, such as-code approaches seem under-investigated in the privacy domain, with little work elaborating on (i) the automatic detection of privacy properties in source code or (ii) the generation of privacy-friendly code. In this work, we seek to characterize the current research landscape of Privacy as Code (PaC) methods and tools by conducting a rapid literature review. Our results suggest that PaC research is in its infancy, especially regarding the performance evaluation and usability assessment of the existing approaches. Based on these findings, we outline and discuss prospective research directions concerning empirical studies with software practitioners, the curation of benchmark datasets, and the role of generative AI technologies.
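To make the abstract's notion of "automatic detection of privacy properties in source code" concrete, here is a minimal, hypothetical PaC-style check in Python. The PII keyword list, the choice of logging calls as sinks, and all names are illustrative assumptions, not a method proposed in the paper.

```python
# Illustrative Privacy-as-Code check: flag identifiers that look like
# personal data when they reach a logging call. The PII hints and the
# notion of a "sink" are simplifying assumptions for demonstration only.
import ast

PII_HINTS = {"email", "ssn", "phone", "address", "birthdate", "password"}
LOG_SINKS = {"debug", "info", "warning", "error", "critical", "print"}

def find_privacy_leaks(source: str) -> list[tuple[int, str]]:
    """Return (line, identifier) pairs where a PII-like name reaches a log sink."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
        if name not in LOG_SINKS:
            continue
        for arg in ast.walk(node):
            if isinstance(arg, ast.Name) and any(h in arg.id.lower() for h in PII_HINTS):
                findings.append((node.lineno, arg.id))
    return findings

sample = "user_email = get_user()\nlogger.info(user_email)\n"
print(find_privacy_leaks(sample))  # -> [(2, 'user_email')]
```

A real PaC tool would need taint tracking across assignments and calls; this sketch only matches names syntactically.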
Related papers
- A Survey on Private Transformer Inference [17.38462391595219]
Transformer models have revolutionized AI, enabling applications like content generation and sentiment analysis.
However, their use in Machine Learning as a Service (MLaaS) raises significant privacy concerns.
Private Transformer Inference (PTI) addresses these issues using cryptographic techniques.
arXiv Detail & Related papers (2024-12-11T07:05:24Z)
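For a flavor of the cryptographic techniques such protocols build on, below is a toy additive secret-sharing sketch in Python. The modulus, the two-party setting, and the public-weight linear layer are illustrative assumptions; real PTI protocols additionally require secure multiplication, truncation, and non-linear layer protocols.

```python
# Toy additive secret sharing, one building block behind private
# inference: two parties each hold a random-looking share of the input,
# and a linear layer can be evaluated share-wise because the scheme is
# additively homomorphic. Didactic sketch only, not a full PTI protocol.
import secrets

P = 2**61 - 1  # prime modulus (assumption: integer-encoded inputs)

def share(x: int) -> tuple[int, int]:
    r = secrets.randbelow(P)
    return r, (x - r) % P  # x = share0 + share1 (mod P)

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

# A public weight vector applied to a secret-shared input: each party
# computes the dot product on its own share, never seeing the input.
w = [3, 1, 4]
x = [10, 20, 30]
shares = [share(v) for v in x]
y0 = sum(wi * s0 for wi, (s0, _) in zip(w, shares)) % P
y1 = sum(wi * s1 for wi, (_, s1) in zip(w, shares)) % P
assert reconstruct(y0, y1) == sum(wi * xi for wi, xi in zip(w, x)) % P
```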
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack has emerged: model inversion attacks (MIAs), which aim to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey summarizes up-to-date MIA research, covering both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
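To illustrate the core idea behind MIAs, the hypothetical sketch below gradient-ascends a candidate input against a tiny made-up linear model until it elicits maximal confidence, recovering a prototypical (potentially sensitive) feature vector. The weights, step size, and iteration count are all invented for illustration; real MIAs target far larger models.

```python
# Minimal sketch of the idea behind a model inversion attack: optimize a
# candidate input to maximize the target model's class score. All model
# parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([2.0, -1.0, 0.5]), 0.1  # the (made-up) target model

def target_confidence(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid score for class 1

x = rng.normal(size=3)  # attacker's random starting guess
for _ in range(200):
    p = target_confidence(x)
    grad = p * (1 - p) * w  # gradient of the sigmoid w.r.t. the input
    x += 0.5 * grad         # ascend toward high-confidence inputs
print(target_confidence(x))  # close to 1.0: a high-confidence reconstruction
```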
- New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- The current state of security -- Insights from the German software industry [0.0]
This paper outlines the main ideas of secure software development that have been discussed in the literature.
A dataset on implementation in practice is gathered through qualitative interviews with 20 companies.
arXiv Detail & Related papers (2024-02-13T13:05:10Z)
- Finding Software Vulnerabilities in Open-Source C Projects via Bounded Model Checking [2.9129603096077332]
We advocate that bounded model-checking techniques can efficiently detect vulnerabilities in general software systems.
We have developed and evaluated a methodology to verify large software systems using a state-of-the-art bounded model checker.
arXiv Detail & Related papers (2023-11-09T11:25:24Z)
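For intuition about what bounded model checking does, the toy Python sketch below enumerates all states reachable within a bound and reports a trace that violates a safety property. Real bounded model checkers such as CBMC or ESBMC encode this search as a SAT/SMT formula over program semantics rather than enumerating states, so this is a didactic approximation with an invented example system.

```python
# A toy explicit-state bounded model checker: explore all states
# reachable within k transitions and return a concrete trace to any
# state that violates the safety property.
from collections import deque

def bounded_check(initial, step, safe, k):
    """Return a violating trace within k transitions, else None."""
    queue = deque([(initial, [initial])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if not safe(state):
            return path  # property violated within the bound
        if len(path) > k:
            continue  # do not expand beyond the bound
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# Hypothetical system: a counter that may advance by 1 or 2.
# Safety property: the counter never reaches 7.
cex = bounded_check(
    initial=0,
    step=lambda s: [s + 1, s + 2],
    safe=lambda s: s != 7,
    k=10,
)
print(cex)  # -> [0, 1, 3, 5, 7]: a concrete trace reaching the bad state
```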
- Detecting Misuse of Security APIs: A Systematic Review [5.329280109719902]
Security Application Programming Interfaces (APIs) are crucial for ensuring software security.
Their misuse introduces vulnerabilities, potentially leading to severe data breaches and substantial financial loss.
This study rigorously reviews the literature on detecting misuse of security APIs to gain a comprehensive understanding of this critical domain.
arXiv Detail & Related papers (2023-06-15T05:53:23Z)
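As a hedged illustration of what rule-based misuse detection can look like, the Python sketch below flags two classic misuse patterns (MD5 for security hashing and the non-cryptographic `random` module). The rule set is a minimal made-up example, not the detection approach of any surveyed tool; production linters such as Bandit ship far richer rules.

```python
# Minimal rule-based security-API misuse detector: match calls of the
# form module.function against a small table of known-unsafe patterns.
import ast

RULES = {
    ("hashlib", "md5"): "MD5 is broken for security purposes; prefer hashlib.sha256",
    ("random", "random"): "random is not cryptographically secure; use the secrets module",
}

def detect_misuse(source: str) -> list[str]:
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            owner = node.func.value
            if isinstance(owner, ast.Name):
                rule = RULES.get((owner.id, node.func.attr))
                if rule:
                    warnings.append(f"line {node.lineno}: {rule}")
    return warnings

code = "import hashlib\ntoken = hashlib.md5(b'secret').hexdigest()\n"
print(detect_misuse(code))  # -> ['line 2: MD5 is broken for security purposes; ...']
```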
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
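One pitfall commonly cited in this line of work is data snooping, where test samples predate or are shuffled in with training data, inflating reported performance. The sketch below shows a strictly temporal train/test split over hypothetical timestamped records as one minimal mitigation; the record layout is an assumption for illustration.

```python
# Avoiding temporal data snooping: train only on records that precede
# the evaluation period. Records are hypothetical (timestamp, features,
# label) tuples.
from datetime import date

def temporal_split(records, cutoff):
    """Train on everything before `cutoff`, test on everything at or after it."""
    train = [r for r in records if r[0] < cutoff]
    test = [r for r in records if r[0] >= cutoff]
    return train, test

records = [
    (date(2019, 3, 1), [0.1, 0.9], 0),
    (date(2019, 9, 1), [0.7, 0.2], 1),
    (date(2020, 2, 1), [0.6, 0.3], 1),
]
train, test = temporal_split(records, cutoff=date(2020, 1, 1))
assert all(r[0] < t[0] for r in train for t in test)  # no future data leaks into training
```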
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.