Robust Machine Learning Systems: Challenges, Current Trends,
Perspectives, and the Road Ahead
- URL: http://arxiv.org/abs/2101.02559v1
- Date: Mon, 4 Jan 2021 20:06:56 GMT
- Authors: Muhammad Shafique, Mahum Naseer, Theocharis Theocharides, Christos
Kyrkou, Onur Mutlu, Lois Orosa, Jungwook Choi
- Abstract summary: Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and Internet-of-Things (IoT) devices due to their powerful decision-making capabilities.
They are vulnerable to various security and reliability threats, at both the hardware and software levels, that compromise their accuracy.
This paper summarizes the prominent vulnerabilities of modern ML systems and highlights successful defenses and mitigation techniques against them.
- Score: 24.60052335548398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) techniques have been rapidly adopted by smart
Cyber-Physical Systems (CPS) and Internet-of-Things (IoT) due to their powerful
decision-making capabilities. However, they are vulnerable to various security
and reliability threats, at both hardware and software levels, that compromise
their accuracy. These threats get aggravated in emerging edge ML devices that
have stringent constraints in terms of resources (e.g., compute, memory,
power/energy), and that therefore cannot employ costly security and reliability
measures. Security, reliability, and vulnerability mitigation techniques span
from network security measures to hardware protection, with an increased
interest towards formal verification of trained ML models.
This paper summarizes the prominent vulnerabilities of modern ML systems,
highlights successful defenses and mitigation techniques against these
vulnerabilities, both at the cloud (i.e., during the ML training phase) and
edge (i.e., during the ML inference stage), discusses the implications of a
resource-constrained design on the reliability and security of the system,
identifies verification methodologies to ensure correct system behavior, and
describes open research challenges for building secure and reliable ML systems
at both the edge and the cloud.
Related papers
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z) - A Review of Machine Learning-based Security in Cloud Computing [5.384804060261833]
Cloud Computing (CC) is revolutionizing the way IT resources are delivered to users, allowing them to access and manage their systems with increased cost-effectiveness and simplified infrastructure.
With the growth of CC comes a host of security risks, including threats to availability, integrity, and confidentiality.
Machine Learning (ML) is increasingly being used by Cloud Service Providers (CSPs) to reduce the need for human intervention in identifying and resolving security issues.
arXiv Detail & Related papers (2023-09-10T01:52:23Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in Internet-of-Things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments for ML-based smart grid applications (MLsgAPPs) deployed in safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z) - Machine Learning with Confidential Computing: A Systematization of Knowledge [9.632031075287047]
Privacy and security challenges in Machine Learning (ML) have become increasingly severe as ML is pervasively deployed and large attack surfaces have recently been demonstrated.
As a mature system-oriented approach, Confidential Computing has been utilized in both academia and industry to mitigate privacy and security issues in various ML scenarios.
We systematize the prior work on Confidential Computing-assisted ML techniques that provide i) confidentiality guarantees and ii) integrity assurances, and discuss their advanced features and drawbacks.
arXiv Detail & Related papers (2022-08-22T08:23:53Z) - Special Session: Towards an Agile Design Methodology for Efficient,
Reliable, and Secure ML Systems [12.53463551929214]
Modern Machine Learning systems are expected to be highly reliable against hardware failures as well as secure against adversarial and IP-stealing attacks.
Privacy concerns are also becoming a first-order issue.
This article summarizes the main challenges in agile development of efficient, reliable and secure ML systems.
arXiv Detail & Related papers (2022-04-18T17:29:46Z) - Confidential Machine Learning Computation in Untrusted Environments: A
Systems Security Perspective [1.9116784879310027]
This paper conducts a systematic and comprehensive survey, classifying attack vectors and mitigations for TEE-protected confidential ML in untrusted environments.
It analyzes multi-party ML security requirements and discusses related engineering challenges.
arXiv Detail & Related papers (2021-11-05T07:56:25Z) - Unsolved Problems in ML Safety [45.82027272958549]
We present four problems ready for research: withstanding hazards, identifying hazards, steering ML systems, and reducing risks in how ML systems are handled.
We clarify each problem's motivation and provide concrete research directions.
arXiv Detail & Related papers (2021-09-28T17:59:36Z) - Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
It covers new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Towards a Robust and Trustworthy Machine Learning System Development [0.09236074230806578]
We present our recent survey on the state-of-the-art ML trustworthiness and technologies from a security engineering perspective.
We then go beyond a survey by describing a metamodel we created that represents this body of knowledge in a standardized, visual way for ML practitioners.
We propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems.
arXiv Detail & Related papers (2021-01-08T14:43:58Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.