Adversarial Machine Learning -- Industry Perspectives
- URL: http://arxiv.org/abs/2002.05646v3
- Date: Fri, 19 Mar 2021 16:02:28 GMT
- Title: Adversarial Machine Learning -- Industry Perspectives
- Authors: Ram Shankar Siva Kumar, Magnus Nyström, John Lambert, Andrew
Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia
- Abstract summary: Based on interviews with 28 organizations, we found that industry practitioners are not equipped with tactical and strategic tools to protect, detect and respond to attacks on their Machine Learning (ML) systems.
We leverage the insights from the interviews and enumerate the gaps in perspective in securing machine learning systems when viewed in the context of traditional software security development.
- Score: 6.645585967550316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Based on interviews with 28 organizations, we found that industry
practitioners are not equipped with tactical and strategic tools to protect,
detect and respond to attacks on their Machine Learning (ML) systems. We
leverage the insights from the interviews and enumerate the gaps in
perspective in securing machine learning systems when viewed in the context of
traditional software security development. We write this paper from the
perspective of two personas: developers/ML engineers and security incident
responders who are tasked with securing ML systems as they are designed,
developed, and deployed. The goal of this paper is to engage
researchers to revise and amend the Security Development Lifecycle for
industrial-grade software in the adversarial ML era.
Related papers
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges [0.76146285961466]
How to securely develop machine learning-based modern software systems (MLBSS) remains a significant challenge.
Latent vulnerabilities and privacy issues exposed to external users and attackers are largely neglected and hard to identify.
We consider that security for machine learning-based software systems may arise from inherent system defects or external adversarial attacks.
arXiv Detail & Related papers (2022-01-12T23:20:25Z)
- Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective [1.9116784879310027]
This paper conducts a systematic and comprehensive survey by classifying attack vectors and mitigation in TEE-protected confidential ML in the untrusted environment.
It analyzes the multi-party ML security requirements, and discusses related engineering challenges.
arXiv Detail & Related papers (2021-11-05T07:56:25Z)
- A Framework for Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems [41.470634460215564]
We develop an extension to the MulVAL attack graph generation and analysis framework to incorporate cyberattacks on ML production systems.
Using the proposed extension, security practitioners can apply attack graph analysis methods in environments that include ML components.
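The attack graph analysis this entry describes can be sketched in miniature: an environment is modeled as a directed graph of attacker states and exploit steps, and analysis enumerates the paths an attacker could take to reach an ML component. The nodes, edge semantics, and helper function below are hypothetical illustrations for this sketch, not MulVAL's actual Datalog input format or the paper's extension:

```python
# Minimal sketch of attack graph path enumeration over an environment
# containing an ML component. Node names and edges are hypothetical.
from typing import Dict, List

# Adjacency list: nodes are attacker states, edges are exploit steps.
GRAPH: Dict[str, List[str]] = {
    "internet": ["web_server", "ml_api"],
    "web_server": ["internal_host"],   # lateral movement
    "internal_host": ["ml_api"],       # reach model-serving endpoint
    "ml_api": ["ml_model"],            # query access enables evasion/extraction
    "ml_model": [],
}

def attack_paths(graph: Dict[str, List[str]], start: str, target: str,
                 path: List[str] = None) -> List[List[str]]:
    """Depth-first enumeration of simple attack paths from start to target."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # keep paths simple (no revisiting a state)
            found.extend(attack_paths(graph, nxt, target, path))
    return found

for p in attack_paths(GRAPH, "internet", "ml_model"):
    print(" -> ".join(p))
```

Listing every path to the model node, rather than a single shortest one, mirrors how attack-graph tools let practitioners rank and cut off whole classes of routes (e.g., removing the directly exposed inference API eliminates the second path).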
arXiv Detail & Related papers (2021-07-05T05:58:11Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Towards a Robust and Trustworthy Machine Learning System Development [0.09236074230806578]
We present our recent survey on the state-of-the-art ML trustworthiness and technologies from a security engineering perspective.
We then go beyond a survey by describing a metamodel we created that represents the body of knowledge in a standardized, visual way for ML practitioners.
We propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems.
arXiv Detail & Related papers (2021-01-08T14:43:58Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Security and Machine Learning in the Real World [33.40597438876848]
We build on our experience evaluating the security of a machine learning software product deployed on a large scale to broaden the conversation to include a systems security view of vulnerabilities.
We propose a list of short-term mitigation suggestions that practitioners deploying machine learning modules can use to secure their systems.
arXiv Detail & Related papers (2020-07-13T16:57:12Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.