The Risks of Machine Learning Systems
- URL: http://arxiv.org/abs/2204.09852v1
- Date: Thu, 21 Apr 2022 02:42:10 GMT
- Title: The Risks of Machine Learning Systems
- Authors: Samson Tan, Araz Taeihagh, Kathy Baxter
- Abstract summary: A system's overall risk is influenced by its direct and indirect effects.
Existing frameworks for ML risk/impact assessment often address an abstract notion of risk or do not concretize this dependence.
First-order risks stem from aspects of the ML system, while second-order risks stem from the consequences of first-order risks.
- Score: 11.105884571838818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The speed and scale at which machine learning (ML) systems are deployed are
accelerating even as an increasing number of studies highlight their potential
for negative impact. There is a clear need for companies and regulators to
manage the risk from proposed ML systems before they harm people. To achieve
this, private and public sector actors first need to identify the risks posed
by a proposed ML system. A system's overall risk is influenced by its direct
and indirect effects. However, existing frameworks for ML risk/impact
assessment often address an abstract notion of risk or do not concretize this
dependence.
We propose to address this gap with a context-sensitive framework for
identifying ML system risks comprising two components: a taxonomy of the first-
and second-order risks posed by ML systems, and their contributing factors.
First-order risks stem from aspects of the ML system, while second-order risks
stem from the consequences of first-order risks. These consequences are system
failures that result from design and development choices. We explore how
different risks may manifest in various types of ML systems, the factors that
affect each risk, and how first-order risks may lead to second-order effects
when the system interacts with the real world.
Throughout the paper, we show how real events and prior research fit into our
Machine Learning System Risk framework (MLSR). MLSR operates on ML systems
rather than technologies or domains, recognizing that a system's design,
implementation, and use case all contribute to its risk. In doing so, it
unifies the risks that are commonly discussed in the ethical AI community
(e.g., ethical/human rights risks) with system-level risks (e.g., application,
design, control risks), paving the way for holistic risk assessments of ML
systems.
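To make the first-order/second-order distinction concrete, the hypothetical sketch below encodes a couple of MLSR-style entries as plain data records. The class names, fields, and example risks are illustrative assumptions for this page, not structures taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class FirstOrderRisk:
    # Risk arising from aspects of the ML system itself
    # (e.g., design, data, and development choices).
    name: str
    contributing_factors: List[str] = field(default_factory=list)


@dataclass
class SecondOrderRisk:
    # Risk arising from the consequences (system failures) of
    # first-order risks once the system interacts with the real world.
    name: str
    triggered_by: List[FirstOrderRisk] = field(default_factory=list)


# Illustrative (made-up) entries, not drawn from the paper's taxonomy.
unrepresentative_data = FirstOrderRisk(
    name="unrepresentative training data",
    contributing_factors=["sampling strategy", "label quality", "population drift"],
)

discriminatory_outcomes = SecondOrderRisk(
    name="systematic denial of service to a subgroup",
    triggered_by=[unrepresentative_data],
)

if __name__ == "__main__":
    # Walk from a second-order risk back to its first-order causes.
    for cause in discriminatory_outcomes.triggered_by:
        print(f"{discriminatory_outcomes.name} <- {cause.name}: {cause.contributing_factors}")
```

The point of the sketch is only that a second-order entry is never assessed in isolation: it points back to the first-order risks, and their contributing factors, that can trigger it once the system is deployed.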
Related papers
- Quantifying Risk Propensities of Large Language Models: Ethical Focus and Bias Detection through Role-Play [0.43512163406552007]
As Large Language Models (LLMs) become more prevalent, concerns about their safety, ethics, and potential biases have risen.
This study innovatively applies the Domain-Specific Risk-Taking (DOSPERT) scale from cognitive science to LLMs.
We propose a novel Ethical Decision-Making Risk Attitude Scale (EDRAS) to assess LLMs' ethical risk attitudes in depth.
arXiv Detail & Related papers (2024-10-26T15:55:21Z)
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- RiskQ: Risk-sensitive Multi-Agent Reinforcement Learning Value Factorization [49.26510528455664]
We introduce the Risk-sensitive Individual-Global-Max (RIGM) principle as a generalization of the Individual-Global-Max (IGM) and Distributional IGM (DIGM) principles.
We show that RiskQ can obtain promising performance through extensive experiments.
arXiv Detail & Related papers (2023-11-03T07:18:36Z)
- Concrete Safety for ML Problems: System Safety for ML Development and Assessment [0.758305251912708]
Concerns of trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements.
Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems.
arXiv Detail & Related papers (2023-02-06T18:02:07Z)
- System Safety Engineering for Social and Ethical ML Risks: A Case Study [0.5249805590164902]
Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems.
Existing approaches are largely disjointed, ad-hoc and of unknown effectiveness.
We focus in particular on how this analysis can extend to identifying social and ethical risks and developing concrete design-level controls to mitigate them.
arXiv Detail & Related papers (2022-11-08T22:58:58Z)
- SOTIF Entropy: Online SOTIF Risk Quantification and Mitigation for Autonomous Driving [16.78084912175149]
This paper proposes the "Self-Surveillance and Self-Adaption System" as a systematic approach to minimizing SOTIF risk online.
The core of this system is the risk monitoring of the implemented artificial intelligence algorithms within the autonomous vehicles.
The inherent perception algorithm risk and external collision risk are jointly quantified via SOTIF entropy.
arXiv Detail & Related papers (2022-11-08T05:02:12Z)
- From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML [8.411124873373172]
Inappropriate design and deployment of machine learning (ML) systems leads to negative downstream social and ethical impact for users, society and the environment.
Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent.
arXiv Detail & Related papers (2022-10-06T00:09:06Z)
- Risk-Driven Design of Perception Systems [47.787943101699966]
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z)
- Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure."
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)