From plane crashes to algorithmic harm: applicability of safety
engineering frameworks for responsible ML
- URL: http://arxiv.org/abs/2210.03535v1
- Date: Thu, 6 Oct 2022 00:09:06 GMT
- Title: From plane crashes to algorithmic harm: applicability of safety
engineering frameworks for responsible ML
- Authors: Shalaleh Rismani, Renee Shelby, Andrew Smart, Edgar Jatho, Joshua
Kroll, AJung Moon, Negar Rostamzadeh
- Abstract summary: Inappropriate design and deployment of machine learning (ML) systems leads to negative downstream social and ethical impact for users, society and the environment.
Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inappropriate design and deployment of machine learning (ML) systems leads to
negative downstream social and ethical impact -- described here as social and
ethical risks -- for users, society and the environment. Despite the growing
need to regulate ML systems, current processes for assessing and mitigating
risks are disjointed and inconsistent. We interviewed 30 industry practitioners
on their current social and ethical risk management practices, and collected
their first reactions on adapting safety engineering frameworks into their
practice -- namely, System Theoretic Process Analysis (STPA) and Failure Mode
and Effects Analysis (FMEA). Our findings suggest STPA/FMEA can provide
appropriate structure toward social and ethical risk assessment and mitigation
processes. However, we also find nontrivial challenges in integrating such
frameworks in the fast-paced culture of the ML industry. We call on the ML
research community to strengthen existing frameworks and assess their efficacy,
ensuring that ML systems are safer for all people.
Related papers
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) is widely believed to accelerate application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in this belief.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs could effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development [8.912560990925993]
We apply two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image models.
Results of our analysis demonstrate that these safety frameworks can uncover failures and hazards that pose social and ethical risks.
arXiv Detail & Related papers (2023-07-19T02:46:20Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Concrete Safety for ML Problems: System Safety for ML Development and Assessment [0.758305251912708]
Concerns of trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements.
Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems.
arXiv Detail & Related papers (2023-02-06T18:02:07Z)
- System Safety Engineering for Social and Ethical ML Risks: A Case Study [0.5249805590164902]
Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems.
Existing approaches are largely disjointed, ad hoc, and of unknown effectiveness.
We focus in particular on how this analysis can extend to identifying social and ethical risks and developing concrete design-level controls to mitigate them.
arXiv Detail & Related papers (2022-11-08T22:58:58Z)
- The Risks of Machine Learning Systems [11.105884571838818]
A system's overall risk is influenced by its direct and indirect effects.
Existing frameworks for ML risk/impact assessment often address an abstract notion of risk or do not concretize this dependence.
First-order risks stem from aspects of the ML system, while second-order risks stem from the consequences of first-order risks.
arXiv Detail & Related papers (2022-04-21T02:42:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.