Special Session: Towards an Agile Design Methodology for Efficient,
Reliable, and Secure ML Systems
- URL: http://arxiv.org/abs/2204.09514v1
- Date: Mon, 18 Apr 2022 17:29:46 GMT
- Authors: Shail Dave, Alberto Marchisio, Muhammad Abdullah Hanif, Amira Guesmi,
Aviral Shrivastava, Ihsen Alouani, Muhammad Shafique
- Abstract summary: Modern Machine Learning systems are expected to be highly reliable against hardware failures as well as secure against adversarial and IP stealing attacks.
Privacy concerns are also becoming a first-order issue.
This article summarizes the main challenges in agile development of efficient, reliable and secure ML systems.
- Score: 12.53463551929214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The real-world use cases of Machine Learning (ML) have exploded over the past
few years. However, the current computing infrastructure is insufficient to
support all real-world applications and scenarios. Apart from high efficiency
requirements, modern ML systems are expected to be highly reliable against
hardware failures as well as secure against adversarial and IP stealing
attacks. Privacy concerns are also becoming a first-order issue. This article
summarizes the main challenges in agile development of efficient, reliable and
secure ML systems, and then presents an outline of an agile design methodology
to generate efficient, reliable and secure ML systems based on user-defined
constraints and objectives.
Related papers
- Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems [27.316115171846953]
Large Language Models (LLMs) have shown significant promise in real-world decision-making tasks for embodied AI.
LLMs are fine-tuned to leverage their inherent common sense and reasoning abilities while being tailored to specific applications.
This fine-tuning process introduces considerable safety and security vulnerabilities, especially in safety-critical cyber-physical systems.
arXiv Detail & Related papers (2024-05-27T17:59:43Z)
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Machine Learning with Confidential Computing: A Systematization of Knowledge [9.632031075287047]
Privacy and security challenges in Machine Learning (ML) have become increasingly severe, along with ML's pervasive development and the recent demonstration of large attack surfaces.
As a mature system-oriented approach, Confidential Computing has been utilized in both academia and industry to mitigate privacy and security issues in various ML scenarios.
We systematize the prior work on Confidential Computing-assisted ML techniques that provide i) confidentiality guarantees and ii) integrity assurances, and discuss their advanced features and drawbacks.
arXiv Detail & Related papers (2022-08-22T08:23:53Z)
- Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective [1.9116784879310027]
This paper conducts a systematic and comprehensive survey by classifying attack vectors and mitigation in TEE-protected confidential ML in the untrusted environment.
It analyzes the multi-party ML security requirements, and discusses related engineering challenges.
arXiv Detail & Related papers (2021-11-05T07:56:25Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
New models and training techniques are needed to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Towards a Robust and Trustworthy Machine Learning System Development [0.09236074230806578]
We present our recent survey on the state-of-the-art ML trustworthiness and technologies from a security engineering perspective.
We then push our studies forward above and beyond a survey by describing a metamodel we created that represents the body of knowledge in a standard and visualized way for ML practitioners.
We propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems.
arXiv Detail & Related papers (2021-01-08T14:43:58Z)
- Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead [24.60052335548398]
Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and the Internet of Things (IoT).
These systems are vulnerable to various security and reliability threats, at both the hardware and software levels, that compromise their accuracy.
This paper summarizes the prominent vulnerabilities of modern ML systems and highlights successful defenses and mitigation techniques against these vulnerabilities.
arXiv Detail & Related papers (2021-01-04T20:06:56Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.