Usable Security for ML Systems in Mental Health: A Framework
- URL: http://arxiv.org/abs/2008.07738v1
- Date: Tue, 18 Aug 2020 04:44:47 GMT
- Title: Usable Security for ML Systems in Mental Health: A Framework
- Authors: Helen Jiang, Erwen Senge
- Abstract summary: This paper introduces a framework to guide and evaluate security-related designs, implementations, and deployments of machine learning (ML) systems in mental health.
We propose new principles and requirements to make security mechanisms usable for end-users of these ML systems in mental health.
We present several concrete scenarios in which usable security cases and profiles of ML systems in mental health applications are examined and evaluated.
- Score: 2.436681150766912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the applications and demands of Machine learning (ML) systems in mental
health are growing, there is little discussion or consensus regarding a
uniquely challenging aspect: building security methods and requirements into
these ML systems while keeping them usable for end-users. This question of
usable security is important because a lack of consideration of either
security or usability would hinder large-scale user adoption and active usage
of ML systems in mental health applications.
In this short paper, we introduce a framework of four pillars and a set of
desired properties that can be used to systematically guide and evaluate
security-related designs, implementations, and deployments of ML systems for
mental health. We aim to weave together threads from different domains,
incorporate existing views, and propose new principles and requirements, in an
effort to lay out a clear framework in which criteria and expectations are
established and then used to make security mechanisms usable for end-users of
these ML systems in mental health. Together with this framework, we present
several concrete scenarios in which usable security cases and profiles of
ML systems in mental health applications are examined and evaluated.
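The abstract does not name the four pillars or the desired properties, so the sketch below is only a minimal illustration of how such a framework might be operationalized as a scored checklist when evaluating a deployment scenario. The pillar names, scenario, scores, and threshold are all hypothetical placeholders, not taken from the paper:

```python
from dataclasses import dataclass, field

# Hypothetical pillar names: the paper defines four pillars, but their actual
# names are not given in this abstract, so these are illustrative placeholders.
PILLARS = ("data_protection", "model_integrity", "access_control", "user_transparency")

@dataclass
class UsableSecurityProfile:
    """A usable-security profile for one ML-in-mental-health scenario."""
    scenario: str
    pillar_scores: dict = field(default_factory=dict)  # pillar -> score in [0, 1]

    def evaluate(self, threshold: float = 0.6) -> dict:
        """Flag every pillar whose reviewer score falls below the threshold."""
        gaps = {p: self.pillar_scores.get(p, 0.0)
                for p in PILLARS
                if self.pillar_scores.get(p, 0.0) < threshold}
        return {"scenario": self.scenario, "passes": not gaps, "gaps": gaps}

# Hypothetical usage: a teletherapy chatbot scored on each pillar by reviewers.
profile = UsableSecurityProfile(
    scenario="teletherapy chatbot",
    pillar_scores={"data_protection": 0.8, "model_integrity": 0.7,
                   "access_control": 0.5, "user_transparency": 0.9},
)
print(profile.evaluate())  # access_control (0.5) falls below 0.6, so passes=False
```

Any real use of the framework would replace these placeholder pillars and scores with the criteria and expectations the paper itself establishes.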
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models [107.82336341926134]
SALAD-Bench is a safety benchmark specifically designed for evaluating Large Language Models (LLMs).
It transcends conventional benchmarks through its large scale, rich diversity, intricate taxonomy spanning three levels, and versatile functionalities.
arXiv Detail & Related papers (2024-02-07T17:33:54Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) sees an increasing prevalence of being used in the internet-of-things (IoT)-based smart grid.
adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze their impact on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Concrete Safety for ML Problems: System Safety for ML Development and Assessment [0.758305251912708]
Concerns about trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements.
Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems.
arXiv Detail & Related papers (2023-02-06T18:02:07Z)
- Review of the AMLAS Methodology for Application in Healthcare [2.6072209210124675]
There is a need to proactively assure the safety of ML to prevent patient safety from being compromised.
The Assurance of Machine Learning for use in Autonomous Systems (AMLAS) methodology was developed by the Assuring Autonomy International Programme.
This review appraises the methodology by consulting ML manufacturers to understand whether it converges with or diverges from their current safety assurance practices.
arXiv Detail & Related papers (2022-09-01T13:00:36Z)
- Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
The survey covers new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Its organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS) [16.579772998870233]
We introduce a methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS).
AMLAS comprises a set of safety case patterns and a process for integrating safety assurance into the development of ML components.
arXiv Detail & Related papers (2021-02-02T15:41:57Z)
- Safety design concepts for statistical machine learning components toward accordance with functional safety standards [0.38073142980732994]
In recent years, critical incidents and accidents have been reported due to misjudgments by statistical machine learning.
In this paper, we organize five kinds of technical safety concepts (TSCs) for bringing components into accordance with functional safety standards.
arXiv Detail & Related papers (2020-08-04T01:01:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.