Security for Machine Learning-based Software Systems: a survey of
threats, practices and challenges
- URL: http://arxiv.org/abs/2201.04736v2
- Date: Sun, 17 Dec 2023 23:17:11 GMT
- Authors: Huaming Chen, M. Ali Babar
- Abstract summary: How to securely develop machine learning-based modern software systems (MLBSS) remains a major challenge.
Latent vulnerabilities and privacy issues exposed to external users and attackers are largely neglected and hard to identify.
We consider that security for machine learning-based software systems may arise from inherent system defects or external adversarial attacks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid development of Machine Learning (ML) has demonstrated superior
performance in many areas, such as computer vision, video and speech
recognition. It has now been increasingly leveraged in software systems to
automate core tasks. However, how to securely develop machine
learning-based modern software systems (MLBSS) remains a major challenge;
insufficient consideration of security will largely limit their application in
safety-critical domains. One concern is that present MLBSS development
tends to be rushed, so latent vulnerabilities and privacy issues exposed to
external users and attackers are largely neglected and hard to
identify. Additionally, machine learning-based software systems exhibit
different liabilities towards novel vulnerabilities at different development
stages, from requirements analysis to system maintenance, due to their inherent
limitations in models and data and to external adversary capabilities.
The successful development of such intelligent systems will thus require
dedicated joint efforts from different research areas, i.e., software
engineering, system security and machine learning. Most of the recent works
regarding the security issues for ML have a strong focus on the data and
models, which has brought adversarial attacks into consideration. In this work,
we consider that security for machine learning-based software systems may arise
from inherent system defects or external adversarial attacks, and the secure
development practices should be taken throughout the whole lifecycle. While
machine learning has become a new threat domain for existing software
engineering practices, there is no such review work covering the topic.
Overall, we present a holistic review of security for MLBSS, which
provides a systematic understanding through a structured review of three distinct
aspects in terms of security threats...
Related papers
- In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models [97.82118821263825]
Text-to-image (T2I) models have shown remarkable progress, but their potential to generate harmful content remains a critical concern in the ML community.
We propose ICER, a novel red-teaming framework that generates interpretable and semantically meaningful problematic prompts.
Our work provides crucial insights for developing more robust safety mechanisms in T2I systems.
arXiv Detail & Related papers (2024-11-25T04:17:24Z) - Threats, Attacks, and Defenses in Machine Unlearning: A Survey [14.03428437751312]
Machine Unlearning (MU) has recently gained considerable attention due to its potential to achieve Safe AI.
This survey aims to bridge the gaps among the extensive studies on threats, attacks, and defenses in machine unlearning.
arXiv Detail & Related papers (2024-03-20T15:40:18Z) - Secure Software Development: Issues and Challenges [0.0]
The digitization of our lives has helped solve human problems and improve quality of life.
Hackers aim to steal the data of innocent people to use it for other causes such as identity fraud, scams and many more.
The goal of secure system software is to prevent such exploitation from ever happening by following a secure system life cycle.
arXiv Detail & Related papers (2023-11-18T09:44:48Z) - Software Repositories and Machine Learning Research in Cyber Security [0.0]
The integration of robust cyber security defenses has become essential across all phases of software development.
Attempts have been made to leverage topic modeling and machine learning for the detection of these early-stage vulnerabilities in the software requirements process.
arXiv Detail & Related papers (2023-11-01T17:46:07Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber
Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Security and Machine Learning in the Real World [33.40597438876848]
We build on our experience evaluating the security of a machine learning software product deployed on a large scale to broaden the conversation to include a systems security view of vulnerabilities.
We propose a list of short-term mitigation suggestions that practitioners deploying machine learning modules can use to secure their systems.
arXiv Detail & Related papers (2020-07-13T16:57:12Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.