AI Maintenance: A Robustness Perspective
- URL: http://arxiv.org/abs/2301.03052v1
- Date: Sun, 8 Jan 2023 15:02:38 GMT
- Title: AI Maintenance: A Robustness Perspective
- Authors: Pin-Yu Chen and Payel Das
- Abstract summary: We introduce key robustness challenges in the AI lifecycle and motivate AI maintenance by analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
- Score: 91.28724422822003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advancements in machine learning (ML) methods and compute resources,
artificial intelligence (AI) empowered systems are becoming a prevailing
technology. However, current AI technology such as deep learning is not
flawless. Sharply increased model complexity and data scale intensify the
challenges of ensuring trustworthiness and transparency, which can create new
risks and negative impacts. In this paper, we carve out AI maintenance from the
robustness perspective. We start by introducing key robustness challenges in
the AI lifecycle and motivating AI maintenance through analogies to car
maintenance. We then propose an AI model
inspection framework to detect and mitigate robustness risks. We also draw
inspiration from vehicle autonomy to define the levels of AI robustness
automation. Our proposal for AI maintenance facilitates robustness assessment,
status tracking, risk scanning, model hardening, and regulation throughout the
AI lifecycle, which is an essential milestone toward building sustainable and
trustworthy AI ecosystems.
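As a concrete, hypothetical illustration of what the "risk scanning" step of such an inspection framework could look like, the minimal sketch below compares a classifier's accuracy on clean inputs against inputs perturbed with the standard one-step FGSM attack (Goodfellow et al., 2015). The toy model, synthetic data, and eps value are placeholders, and this generic check is not the specific framework proposed in the paper.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps):
    # One-step FGSM: nudge each input in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Hypothetical stand-ins for a production model and its held-out test set.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
x, y = torch.randn(256, 20), torch.randint(0, 3, (256,))

clean_acc = accuracy(model, x, y)
adv_acc = accuracy(model, fgsm_perturb(model, x, y, eps=0.1), y)
print(f"clean accuracy: {clean_acc:.3f}, FGSM accuracy: {adv_acc:.3f}")
# A large gap between the two numbers flags a robustness risk; model hardening
# (e.g., adversarial training) would be a standard mitigation.
```

In a maintenance setting, tracking these two numbers across model versions gives a simple robustness status signal analogous to a periodic vehicle check.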
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z)
- Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline [0.0]
This paper explores the potential of integrating Artificial Intelligence (AI) into Cyber Threat Intelligence (CTI).
We provide a blueprint of an AI-enhanced CTI processing pipeline, and detail its components and functionalities.
We discuss ethical dilemmas, potential biases, and the imperative for transparency in AI-driven decisions.
arXiv Detail & Related papers (2024-03-05T19:03:56Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control [0.0]
The extent and scope of future AI capabilities remain a key uncertainty.
There are concerns over the extent of integration and oversight of opaque AI decision processes.
This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis.
arXiv Detail & Related papers (2022-11-06T15:46:02Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection.
In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Quality Management of Machine Learning Systems [0.0]
Artificial Intelligence (AI) has become a part of our daily lives due to major advances in Machine Learning (ML) techniques.
For business/mission-critical systems, serious concerns about reliability and maintainability of AI applications remain.
This paper presents a holistic quality management framework for ML applications based on current advances.
arXiv Detail & Related papers (2020-06-16T21:34:44Z)