Proceedings of the Robust Artificial Intelligence System Assurance
(RAISA) Workshop 2022
- URL: http://arxiv.org/abs/2202.04787v1
- Date: Thu, 10 Feb 2022 01:15:50 GMT
- Title: Proceedings of the Robust Artificial Intelligence System Assurance
(RAISA) Workshop 2022
- Authors: Olivia Brown, Brad Dillman
- Abstract summary: The RAISA workshop will focus on research, development and application of robust artificial intelligence (AI) and machine learning (ML) systems.
Rather than studying robustness with respect to particular ML algorithms, our approach will be to explore robustness assurance at the system architecture level.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Robust Artificial Intelligence System Assurance (RAISA) workshop will
focus on research, development and application of robust artificial
intelligence (AI) and machine learning (ML) systems. Rather than studying
robustness with respect to particular ML algorithms, our approach will be to
explore robustness assurance at the system architecture level, during both
development and deployment, and within the human-machine teaming context. While
the research community is converging on robust solutions for individual AI
models in specific scenarios, the problem of evaluating and assuring the
robustness of an AI system across its entire life cycle is much more complex.
Moreover, the operational context in which AI systems are deployed necessitates
consideration of robustness and its relation to principles of fairness,
privacy, and explainability.
Related papers
- Collaborative AI in Sentiment Analysis: System Architecture, Data Prediction and Deployment Strategies [3.3374611485861116]
Large language model (LLM) based artificial intelligence technologies have been a game-changer, particularly in sentiment analysis.
However, integrating diverse AI models for processing complex multimodal data and the associated high costs of feature extraction present significant challenges.
This study introduces a collaborative AI framework designed to efficiently distribute and resolve tasks across various AI systems.
arXiv Detail & Related papers (2024-10-17T06:14:34Z)
- Operating System And Artificial Intelligence: A Systematic Review [17.256378758253437]
We explore how AI-driven tools enhance OS performance, security, and efficiency, while OS advancements facilitate more sophisticated AI applications.
We analyze various AI techniques employed to optimize OS functionalities, including memory management, process scheduling, and intrusion detection.
We explore the promising prospects of Intelligent OSes, considering not only how innovative OS architectures will pave the way for groundbreaking opportunities but also how AI will significantly contribute to advancing these next-generation OSs.
arXiv Detail & Related papers (2024-07-19T05:29:34Z)
- Quantifying AI Vulnerabilities: A Synthesis of Complexity, Dynamical Systems, and Game Theory [0.0]
We propose a novel approach that introduces three metrics: System Complexity Index (SCI), Lyapunov Exponent for AI Stability (LEAIS), and Nash Equilibrium Robustness (NER).
SCI quantifies the inherent complexity of an AI system, LEAIS captures its stability and sensitivity to perturbations, and NER evaluates its strategic robustness against adversarial manipulation (an illustrative perturbation-sensitivity sketch in this spirit appears after this list).
arXiv Detail & Related papers (2024-04-07T07:05:59Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
- Developing and Operating Artificial Intelligence Models in Trustworthy Autonomous Systems [8.27310353898034]
This work-in-progress paper aims to close the gap between the development and operation of AI-based autonomous systems (AS).
We propose a novel, holistic DevOps approach to put it into practice.
arXiv Detail & Related papers (2020-03-11T17:52:30Z)
- AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings [8.445274192818825]
It is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions.
The focus of this symposium was on AI systems to improve data quality, technical robustness, and safety.
Submissions from broadly defined areas also discussed approaches addressing requirements such as explainable models, human trust, and ethical aspects of AI.
arXiv Detail & Related papers (2020-01-15T15:30:29Z)
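The LEAIS metric mentioned in the "Quantifying AI Vulnerabilities" entry above is described only at a high level in its abstract. As a rough, hedged illustration of the underlying idea, the minimal Python sketch below estimates a generic Lyapunov-style perturbation-sensitivity exponent for a model by finite differences. The function name, parameters, and toy linear model are assumptions for illustration only, not the paper's actual definition or API.

```python
import numpy as np

def perturbation_sensitivity(model, x, eps=1e-4, n_trials=32, seed=0):
    """Estimate a Lyapunov-style sensitivity exponent for `model` at input `x`.

    Random perturbations of norm `eps` are applied to `x`, and the mean
    log amplification factor log(||model(x + d) - model(x)|| / ||d||) is
    returned. Positive values indicate the model amplifies small input
    perturbations; negative values indicate it damps them.
    (Illustrative only; not the LEAIS metric defined in the cited paper.)
    """
    rng = np.random.default_rng(seed)
    base = model(x)
    log_amps = []
    for _ in range(n_trials):
        d = rng.normal(size=x.shape)
        d *= eps / (np.linalg.norm(d) + 1e-12)   # rescale to norm eps
        amp = np.linalg.norm(model(x + d) - base) / eps
        log_amps.append(np.log(amp + 1e-12))
    return float(np.mean(log_amps))

# Toy usage: for a linear map, the amplification of any unit direction lies
# between the smallest and largest singular values, so the estimate falls
# between log(0.5) and log(2.0) here.
W = np.array([[2.0, 0.0], [0.0, 0.5]])
linear_model = lambda x: W @ x
print(perturbation_sensitivity(linear_model, np.array([1.0, 1.0])))
```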
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.