Security and Machine Learning in the Real World
- URL: http://arxiv.org/abs/2007.07205v1
- Date: Mon, 13 Jul 2020 16:57:12 GMT
- Title: Security and Machine Learning in the Real World
- Authors: Ivan Evtimov, Weidong Cui, Ece Kamar, Emre Kiciman, Tadayoshi Kohno,
Jerry Li
- Abstract summary: We build on our experience evaluating the security of a machine learning software product deployed on a large scale to broaden the conversation to include a systems security view of vulnerabilities.
We propose a list of short-term mitigation suggestions that practitioners deploying machine learning modules can use to secure their systems.
- Score: 33.40597438876848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) models deployed in many safety- and business-critical
systems are vulnerable to exploitation through adversarial examples. A large
body of academic research has thoroughly explored the causes of these blind
spots, developed sophisticated algorithms for finding them, and proposed a few
promising defenses. A vast majority of these works, however, study standalone
neural network models. In this work, we build on our experience evaluating the
security of a machine learning software product deployed on a large scale to
broaden the conversation to include a systems security view of these
vulnerabilities. We describe novel challenges to implementing systems security
best practices in software with ML components. In addition, we propose a list
of short-term mitigation suggestions that practitioners deploying machine
learning modules can use to secure their systems. Finally, we outline
directions for new research into machine learning attacks and defenses that can
serve to advance the state of ML systems security.
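To make the attack class concrete, here is a minimal illustrative sketch (not from the paper) of the fast gradient sign method (FGSM), one standard algorithm for finding such adversarial examples against a standalone classifier; the model, inputs, and epsilon budget are assumptions chosen for illustration.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, epsilon=0.03):
        # Perturb x within an L-infinity ball of radius epsilon so that
        # the model's loss on the true label increases.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # step in the loss-increasing direction
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

The perturbed input typically looks unchanged to a human but can flip the model's prediction, which is the kind of blind spot the abstract describes; a systems security view must assume such inputs can reach any ML component.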
Related papers
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures (a generic sketch of the Random Forest approach appears after this list).
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges [0.76146285961466]
Securely developing modern machine learning-based software systems (MLBSS) remains a major challenge.
Latent vulnerabilities and privacy issues exposed to external users and attackers are easily neglected and hard to identify.
We consider that security threats to machine learning-based software systems may arise from inherent system defects or from external adversarial attacks.
arXiv Detail & Related papers (2022-01-12T23:20:25Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
The survey covers new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from multiple aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Towards a Robust and Trustworthy Machine Learning System Development [0.09236074230806578]
We present our recent survey of state-of-the-art ML trustworthiness and its technologies from a security engineering perspective.
We then go beyond the survey by describing a metamodel we created that represents this body of knowledge in a standardized, visual way for ML practitioners.
We propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems.
arXiv Detail & Related papers (2021-01-08T14:43:58Z)
- Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead [24.60052335548398]
Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and the Internet-of-Things (IoT).
These systems are vulnerable to various security and reliability threats, at both the hardware and software levels, that compromise their accuracy.
This paper summarizes the prominent vulnerabilities of modern ML systems and highlights successful defenses and mitigation techniques against them.
arXiv Detail & Related papers (2021-01-04T20:06:56Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems (one such pitfall, the base-rate fallacy, is worked through in a sketch after this list).
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Adversarial Machine Learning -- Industry Perspectives [6.645585967550316]
Based on interviews with 28 organizations, we found that industry practitioners are not equipped with tactical and strategic tools to protect, detect and respond to attacks on their Machine Learning (ML) systems.
We leverage insights from the interviews to enumerate gaps in how machine learning systems are secured when viewed in the context of traditional software security development.
arXiv Detail & Related papers (2020-02-04T02:28:34Z)
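Two illustrative sketches for entries above follow. First, for the behavioral driver-authentication entry: a minimal sketch of the general Random Forest approach, assuming hypothetical telemetry features and synthetic data; this is not the paper's system, and the SMARTCAN and GANCAN attacks are not shown.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Hypothetical per-window telemetry features, e.g. mean speed,
    # steering-angle variance, and brake-pressure statistics.
    X = rng.normal(size=(200, 3))
    y = rng.integers(0, 2, size=200)  # 1 = legitimate driver, 0 = impostor

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))  # authenticate the first few driving windows

An evasion attack in this setting would craft telemetry that such a classifier accepts as coming from the legitimate driver.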
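Second, for the "Dos and Don'ts" entry: one widely cited evaluation pitfall for learning-based security systems is the base-rate fallacy. The arithmetic below, with detector rates and a base rate chosen purely for illustration, shows why even an accurate detector can raise mostly false alarms when attacks are rare.

    # Hypothetical detector: 99% true positive rate, 1% false positive rate.
    tpr = 0.99
    fpr = 0.01
    base_rate = 0.001  # assume 1 in 1,000 events is actually an attack

    # Bayes' rule: P(attack | alarm)
    precision = (tpr * base_rate) / (tpr * base_rate + fpr * (1 - base_rate))
    print(f"P(attack | alarm) = {precision:.1%}")  # ~9%: most alarms are false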
This list is automatically generated from the titles and abstracts of the papers on this site.