On managing vulnerabilities in AI/ML systems
- URL: http://arxiv.org/abs/2101.10865v1
- Date: Fri, 22 Jan 2021 21:59:44 GMT
- Title: On managing vulnerabilities in AI/ML systems
- Authors: Jonathan M. Spring and April Galyardt and Allen D. Householder and
Nathan VanHoudnos
- Abstract summary: This paper explores how the current paradigm of vulnerability management might adapt to include machine learning systems.
The hypothetical scenario is structured around exploring the changes to the six areas of vulnerability management: discovery, report intake, analysis, coordination, disclosure, and response.
- Score: 1.6058099298620423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores how the current paradigm of vulnerability management
might adapt to include machine learning systems through a thought experiment:
what if flaws in machine learning (ML) were assigned Common Vulnerabilities and
Exposures (CVE) identifiers (CVE-IDs)? We consider both ML algorithms and model
objects. The hypothetical scenario is structured around exploring the changes
to the six areas of vulnerability management: discovery, report intake,
analysis, coordination, disclosure, and response. While algorithm flaws are
well-known in the academic research community, there is no apparent clear line
of communication between this research community and the operational
communities that deploy and manage systems that use ML. The thought experiments
identify ways in which CVE-IDs may establish useful lines of
communication between these two communities. In particular, doing so would
start to introduce the research community to operational security concepts,
which appears to be a gap left by existing efforts.
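To make the thought experiment concrete, here is a minimal sketch of what a CVE-style record for an ML flaw might look like, with one field per vulnerability-management area; the schema and every value in it are hypothetical illustrations, not part of any actual CVE format.

```python
from dataclasses import dataclass

@dataclass
class MLVulnRecord:
    """Hypothetical CVE-style record for a flaw in an ML system.

    One field per vulnerability-management area; none of this is an
    actual CVE schema extension.
    """
    cve_id: str             # placeholder identifier, e.g. "CVE-2021-XXXXX"
    affected_artifact: str  # ML algorithm or deployed model object
    flaw_class: str         # e.g. "adversarial evasion", "data poisoning"
    discovery: str          # how the flaw was found
    report_intake: str      # who received the report
    analysis: str           # assessed impact
    coordination: str       # parties notified
    disclosure: str         # publication plan
    response: str           # mitigation (patch, retraining, ...)

record = MLVulnRecord(
    cve_id="CVE-2021-XXXXX",
    affected_artifact="image-classifier model object",
    flaw_class="adversarial evasion",
    discovery="academic red-team publication",
    report_intake="vendor security team",
    analysis="misclassification under small input perturbations",
    coordination="model vendor and downstream deployers",
    disclosure="coordinated disclosure with fixed timeline",
    response="adversarially retrained model released",
)
print(record.cve_id, "->", record.flaw_class)
```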
Related papers
- How Do Communities of ML-Enabled Systems Smell? A Cross-Sectional Study on the Prevalence of Community Smells [13.840177755312665]
We conducted an empirical study on 188 repositories from the NICHE dataset using the CADOCS tool to identify and analyze community smells.
We found that certain smells, such as Prima Donna Effects and Sharing Villainy, are more prevalent and fluctuate over time compared to others like Radio Silence or Organizational Skirmish.
arXiv Detail & Related papers (2025-04-24T10:23:37Z)
- Boost, Disentangle, and Customize: A Robust System2-to-System1 Pipeline for Code Generation [58.799397354312596]
Large language models (LLMs) have demonstrated remarkable capabilities in various domains, particularly in system 1 tasks.
Research on System2-to-System1 methods has recently surged, exploring System 2 reasoning knowledge obtained via inference-time computation.
In this paper, we focus on code generation, which is a representative System 2 task, and identify two primary challenges.
arXiv Detail & Related papers (2025-02-18T03:20:50Z)
- LLMs in Software Security: A Survey of Vulnerability Detection Techniques and Insights [12.424610893030353]
Large Language Models (LLMs) are emerging as transformative tools for software vulnerability detection.
This paper provides a detailed survey of LLMs in vulnerability detection.
We address challenges such as cross-language vulnerability detection, multimodal data integration, and repository-level analysis.
arXiv Detail & Related papers (2025-02-10T21:33:38Z)
- Survey on AI-Generated Media Detection: From Non-MLLM to MLLM [51.91311158085973]
Methods for detecting AI-generated media have evolved rapidly.
General-purpose detectors based on MLLMs integrate authenticity verification, explainability, and localization capabilities.
Ethical and security considerations have emerged as critical global concerns.
arXiv Detail & Related papers (2025-02-07T12:18:20Z)
- In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models [97.82118821263825]
Text-to-image (T2I) models have shown remarkable progress, but their potential to generate harmful content remains a critical concern in the ML community.
We propose ICER, a novel red-teaming framework that generates interpretable and semantically meaningful problematic prompts.
Our work provides crucial insights for developing more robust safety mechanisms in T2I systems.
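Purely as an illustration of the experience-replay idea named in the title, here is a minimal sketch of the loop; the generator and scorer below are stubs, not the ICER components.

```python
# Sketch of in-context experience replay for red-teaming: effective past
# prompts are replayed as context for the next candidate. The generator
# and scorer are stubs, not the ICER implementation.
import random

random.seed(0)
experience_bank: list[tuple[str, float]] = []  # (prompt, unsafe score)

def propose_prompt(replay: list[tuple[str, float]]) -> str:
    """Stub generator; a real system would condition an LLM on the
    replayed examples to produce a new candidate prompt."""
    return f"candidate prompt #{random.randint(0, 9999)} (context size {len(replay)})"

def unsafe_score(prompt: str) -> float:
    """Stub scorer standing in for T2I generation plus a safety classifier."""
    return random.random()

for _ in range(20):
    replay = sorted(experience_bank, key=lambda e: -e[1])[:3]  # top past hits
    prompt = propose_prompt(replay)
    score = unsafe_score(prompt)
    if score > 0.8:  # keep only prompts that "worked"
        experience_bank.append((prompt, score))

print(f"{len(experience_bank)} problematic prompts collected")
```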
arXiv Detail & Related papers (2024-11-25T04:17:24Z)
- Towards Effectively Detecting and Explaining Vulnerabilities Using Large Language Models [17.96542494363619]
Large language models (LLMs) have demonstrated remarkable capabilities in comprehending complex contexts.
In this paper, we conduct a study to investigate the capabilities of LLMs in both detecting and explaining vulnerabilities.
Under specialized fine-tuning for vulnerability explanation, our LLMVulExp not only detects the types of vulnerabilities in the code but also analyzes the code context to generate the cause, location, and repair suggestions.
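A minimal sketch of such a detect-and-explain query, assuming an OpenAI-compatible chat API; the prompt wording and model name are placeholder assumptions, not the paper's fine-tuned LLMVulExp setup.

```python
# Minimal sketch of an LLM-based detect-and-explain query, assuming an
# OpenAI-compatible chat API; prompt format and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are a vulnerability analyst. For the code below:
1. State whether it is vulnerable and, if so, the vulnerability type.
2. Explain the cause, the location, and a repair suggestion.

Code:
{code}
"""

def detect_and_explain(code: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT.format(code=code)}],
    )
    return response.choices[0].message.content

snippet = 'strcpy(buf, user_input);  /* fixed-size buffer */'
print(detect_and_explain(snippet))
```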
arXiv Detail & Related papers (2024-06-14T04:01:25Z)
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
The growing integration of vehicles with external networks has led to a surge in attacks targeting their internal Controller Area Network (CAN) bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
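As background for the adversarial-training angle, here is a minimal FGSM-style adversarial training sketch on a toy frame classifier; the network, features, and hyperparameters are assumptions for illustration, not the CANEDERLI framework itself.

```python
# Sketch of FGSM-based adversarial training for a toy CAN-frame classifier;
# this illustrates the general technique, not the actual CANEDERLI framework.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, eps=0.05):
    """One-step FGSM perturbation of the input features."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Dummy batch standing in for preprocessed CAN frames (8 features each).
x = torch.rand(64, 8)
y = torch.randint(0, 2, (64,))

for _ in range(100):  # adversarial training loop
    x_adv = fgsm(x, y)               # craft perturbed inputs
    opt.zero_grad()                  # clear grads accumulated by fgsm()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```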
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
- Zero-Knowledge Proof-based Verifiable Decentralized Machine Learning in Communication Network: A Comprehensive Survey [31.111210313340454]
Decentralized approaches to machine learning introduce challenges related to trust and verifiability.
We present a comprehensive review of Zero-Knowledge Proof-based Verifiable Machine Learning (ZKP-VML).
arXiv Detail & Related papers (2023-10-23T12:15:23Z)
- A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning [19.163031235081565]
Machine Learning (ML) and Deep Learning (DL) based models for detecting vulnerabilities in source code have been presented in recent years.
Without a comprehensive survey, it may be difficult to discover gaps in existing research and opportunities for future improvement.
This work addresses that gap by presenting a systematic survey characterizing various features of ML/DL-based source-code-level software vulnerability detection approaches.
arXiv Detail & Related papers (2023-06-20T16:51:59Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from the qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
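A minimal sketch of the sampling idea, assuming toy two-dimensional dynamics and a candidate box (neither taken from the paper): sample points on the boundary of the candidate set and check that the flow never steps outside.

```python
# Toy sampling check that a candidate box is a trapping region for known
# 2-D learning dynamics: the flow at sampled boundary points must not
# leave the box. Dynamics and box here are illustrative assumptions.
import numpy as np

def dynamics(z):
    """Example known learning dynamics: gradient flow toward the origin."""
    return -z

def stays_inside(z, box, eps=1e-6):
    """True if an Euler step of the flow from boundary point z stays in the box."""
    z_next = z + eps * dynamics(z)
    return bool(np.all(z_next >= box[0]) and np.all(z_next <= box[1]))

box = (np.array([-1.0, -1.0]), np.array([1.0, 1.0]))  # candidate safety set
rng = np.random.default_rng(0)

ok = True
for _ in range(1000):
    z = rng.uniform(box[0], box[1])  # random interior point
    i = rng.integers(0, 2)           # coordinate to clamp
    f = rng.integers(0, 2)           # which face (low/high)
    z[i] = box[f][i]                 # project onto a boundary face
    ok = ok and stays_inside(z, box)

print("candidate box passes the sampling check:", ok)
```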
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- Learned Systems Security [30.39158287782567]
A learned system uses machine learning (ML) internally to improve performance.
We can expect such systems to be vulnerable to some adversarial-ML attacks.
We develop a framework for identifying vulnerabilities that stem from the use of ML.
arXiv Detail & Related papers (2022-12-20T15:09:30Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
The survey reviews new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.