Look Before You Leap! Designing a Human-Centered AI System for Change
Risk Assessment
- URL: http://arxiv.org/abs/2108.07951v1
- Date: Wed, 18 Aug 2021 02:41:48 GMT
- Title: Look Before You Leap! Designing a Human-Centered AI System for Change
Risk Assessment
- Authors: Binay Gupta, Anirban Chatterjee, Harika Matha, Kunal Banerjee,
Lalitdutt Parsai, Vijay Agneeswaran
- Abstract summary: Change management is a promising sub-field in operations that manages and reviews the changes to be deployed in production in a systematic manner.
It is practically impossible to manually review a large number of changes on a daily basis and assess the risk associated with them.
There are a few commercial solutions available to address this problem but those solutions lack the ability to incorporate domain knowledge and continuous feedback from domain experts into the risk assessment process.
- Score: 0.5741525024018875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reducing the number of failures in a production system is one of the most
challenging problems in technology-driven industries, such as the online
retail industry. To address this challenge, change management has emerged as a
promising sub-field in operations that manages and reviews the changes to be
deployed in production in a systematic manner. However, it is practically
impossible to manually review a large number of changes on a daily basis and
assess the risk associated with them. This warrants the development of an
automated system to assess the risk associated with a large number of changes.
There are a few commercial solutions available to address this problem but
those solutions lack the ability to incorporate domain knowledge and continuous
feedback from domain experts into the risk assessment process. As part of this
work, we aim to bridge the gap between model-driven risk assessment of change
requests and the assessment of domain experts by building a continuous feedback
loop into the risk assessment process. Here we present our work to build an
end-to-end machine learning system, along with a discussion of some of the
practical challenges we faced related to extreme skewness in class
distribution, concept drift, estimation of the uncertainty associated with the
model's prediction and the overall scalability of the system.
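The abstract names uncertainty estimation as the mechanism for deciding when a change request should be escalated to a domain expert, whose feedback then flows back into the model. A minimal sketch of such an uncertainty-gated escalation step is below; the scoring rule, function names, and thresholds are all illustrative assumptions, not the authors' actual method.

```python
import random
import statistics

def mc_risk_scores(features, n_samples=50, dropout_rate=0.5, seed=0):
    """Collect risk scores from repeated stochastic passes over the
    features (a toy stand-in for Monte-Carlo dropout)."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_samples):
        # Randomly drop feature contributions on each pass.
        kept = [x for x in features if rng.random() > dropout_rate]
        # Toy risk score: mean of surviving feature weights, clipped to [0, 1].
        score = min(max(sum(kept) / max(len(kept), 1), 0.0), 1.0)
        scores.append(score)
    return scores

def assess_change(features, uncertainty_threshold=0.15):
    """Score a change request; defer to a human expert when predictive
    uncertainty is high. In a real system the expert's label would be
    fed back into training, closing the feedback loop."""
    scores = mc_risk_scores(features)
    mean_risk = statistics.mean(scores)
    uncertainty = statistics.stdev(scores)
    if uncertainty > uncertainty_threshold:
        decision = "escalate_to_expert"
    else:
        decision = "auto_approve" if mean_risk < 0.5 else "auto_flag"
    return {"decision": decision, "risk": mean_risk, "uncertainty": uncertainty}
```

The key design point is that escalation is triggered by the spread of the stochastic predictions, not by the risk score itself, so confidently low-risk and confidently high-risk changes are both handled automatically.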
Related papers
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z) - The Invisible Game on the Internet: A Case Study of Decoding Deceptive Patterns [19.55209153462331]
Deceptive patterns are design practices embedded in digital platforms to manipulate users.
Despite advancements in detection tools, a significant gap exists in assessing deceptive pattern risks.
arXiv Detail & Related papers (2024-02-05T22:42:59Z) - Security Challenges in Autonomous Systems Design [1.864621482724548]
With the independence from human control, cybersecurity of such systems becomes even more critical.
This paper thoroughly discusses the state of the art, identifies emerging security challenges and proposes research directions.
arXiv Detail & Related papers (2023-11-05T09:17:39Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous
Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Model evaluation for extreme risks [46.53170857607407]
Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills.
We explain why model evaluation is critical for addressing extreme risks.
arXiv Detail & Related papers (2023-05-24T16:38:43Z) - Quantitative AI Risk Assessments: Opportunities and Challenges [9.262092738841979]
AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society.
Risks have led to proposed regulations, litigation, and general societal concerns.
This paper explores the concept of a quantitative AI Risk Assessment.
arXiv Detail & Related papers (2022-09-13T21:47:25Z) - Risk-Driven Design of Perception Systems [47.787943101699966]
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear
Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
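The last entry above reports that an uncertainty-based human-in-the-loop system improves performance across different levels of human involvement. A toy simulation of that effect is sketched below, deferring the least-confident fraction of model predictions to a human assumed to be correct; the confidence model and all parameters are illustrative assumptions, not the paper's setup.

```python
import random

def simulate_involvement(n_cases=1000, defer_fraction=0.2, seed=1):
    """Defer the least-confident model predictions to a human (assumed
    correct) and return the resulting overall accuracy."""
    rng = random.Random(seed)
    cases = []
    for _ in range(n_cases):
        confidence = rng.random()  # model's self-reported confidence
        # Toy model: probability of being correct tracks confidence.
        correct = rng.random() < 0.5 + 0.5 * confidence
        cases.append((confidence, correct))
    cases.sort(key=lambda c: c[0])  # least confident first
    n_defer = int(defer_fraction * n_cases)
    # Deferred cases are resolved by the human and counted as correct.
    hits = n_defer + sum(correct for _, correct in cases[n_defer:])
    return hits / n_cases
```

Because deferral only ever replaces a model answer with a correct human answer, accuracy in this toy setup is non-decreasing in the level of human involvement, mirroring the qualitative claim in the abstract.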
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.