Security policy audits: why and how
- URL: http://arxiv.org/abs/2207.11306v1
- Date: Fri, 22 Jul 2022 19:27:18 GMT
- Title: Security policy audits: why and how
- Authors: Arvind Narayanan, Kevin Lee
- Abstract summary: This experience paper describes a series of security policy audits.
It exposes policy flaws affecting billions of users that can be exploited by low-tech attackers.
The solutions, in turn, need to be policy-based.
- Score: 8.263685033627668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information security isn't just about software and hardware -- it's at least
as much about policies and processes. But the research community overwhelmingly
focuses on the former over the latter, while gaping policy and process problems
persist. In this experience paper, we describe a series of security policy
audits that we conducted, exposing policy flaws affecting billions of users
that can be -- and often are -- exploited by low-tech attackers who don't need
to use any tools or exploit software vulnerabilities. The solutions, in turn,
need to be policy-based. We advocate for the study of policies and processes,
point out its intellectual and practical challenges, lay out our theory of
change, and present a research agenda.
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI.
An increasing number of efforts address this problem, often either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z)
- Security Debt in Practice: Nuanced Insights from Practitioners [0.3277163122167433]
Tight deadlines, limited resources, and prioritization of functionality over security can lead to insecure coding practices.
Despite their critical importance, there is limited empirical evidence on how software practitioners perceive, manage, and communicate security debt.
This study is based on semi-structured interviews with 22 software practitioners across various roles, organizations, and countries.
arXiv Detail & Related papers (2025-07-15T14:28:28Z) - Keep Security! Benchmarking Security Policy Preservation in Large Language Model Contexts Against Indirect Attacks in Question Answering [3.6152232645741025]
Large Language Models (LLMs) are increasingly deployed in sensitive domains such as enterprise and government.
We introduce a novel large-scale benchmark dataset, CoPriva, evaluating LLM adherence to contextual non-disclosure policies in question answering.
We evaluate 10 LLMs on our benchmark and reveal a significant vulnerability: many models violate user-defined policies and leak sensitive information.
arXiv Detail & Related papers (2025-05-21T17:58:11Z) - Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z) - Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail due to fundamental shortcomings in addressing the distinctive characteristics of AI systems.
Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized AI security incident reporting advancements.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database [0.0]
The project is driven by the rise of generative AI in politics, ongoing policy efforts to address harms, and the need to connect AI incidents and political communication research.
The database contains political deepfake content, metadata, and researcher-coded descriptors drawn from political science, public policy, communication, and misinformation studies.
It aims to help reveal the prevalence, trends, and impact of political deepfakes, such as those featuring major political figures or events.
arXiv Detail & Related papers (2024-09-05T19:24:38Z)
- From Chaos to Consistency: The Role of CSAF in Streamlining Security Advisories [4.850201420807801]
The Common Security Advisory Format (CSAF) aims to bring security advisories into a standardized format.
Our results show that CSAF is currently rarely used.
One of the main reasons is that systems are not yet designed for automation.
arXiv Detail & Related papers (2024-08-27T10:22:59Z)
- From Guidelines to Governance: A Study of AI Policies in Education [1.9659095632676098]
This study employs a survey methodology to examine the policy landscape concerning emerging technologies.
The majority of institutions lack specialized guidelines for the ethical deployment of AI tools such as ChatGPT.
High schools are less inclined to work on policies than higher educational institutions.
arXiv Detail & Related papers (2024-03-22T20:07:58Z)
- The current state of security -- Insights from the German software industry [0.0]
This paper outlines the main ideas of secure software development that have been discussed in the literature.
A dataset on implementation in practice is gathered through qualitative interview research involving 20 companies.
arXiv Detail & Related papers (2024-02-13T13:05:10Z)
- Exploring Security Practices in Infrastructure as Code: An Empirical Study [54.669404064111795]
Cloud computing has become popular thanks to the widespread use of Infrastructure as Code (IaC) tools.
The scripting process does not automatically prevent practitioners from introducing misconfigurations, vulnerabilities, or privacy risks.
Ensuring security relies on practitioners' understanding and the adoption of explicit policies, guidelines, or best practices.
arXiv Detail & Related papers (2023-08-07T23:43:32Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
- A System for Interactive Examination of Learned Security Policies [0.0]
We present a system for interactive examination of learned security policies.
It allows a user to traverse episodes of Markov decision processes in a controlled manner.
arXiv Detail & Related papers (2022-04-03T17:55:32Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Policy Evaluation Networks [50.53250641051648]
We introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding.
Our empirical results demonstrate that combining these three elements can produce policies that outperform those that generated the training data.
arXiv Detail & Related papers (2020-02-26T23:00:27Z)
- Preventing Imitation Learning with Adversarial Policy Ensembles [79.81807680370677]
Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy privacy.
How can we protect against external observers cloning our proprietary policies?
We introduce a new reinforcement learning framework, where we train an ensemble of near-optimal policies.
arXiv Detail & Related papers (2020-01-31T01:57:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.