Beyond the ML Model: Applying Safety Engineering Frameworks to
Text-to-Image Development
- URL: http://arxiv.org/abs/2307.10312v1
- Date: Wed, 19 Jul 2023 02:46:20 GMT
- Title: Beyond the ML Model: Applying Safety Engineering Frameworks to
Text-to-Image Development
- Authors: Shalaleh Rismani, Renee Shelby, Andrew Smart, Renelito Delos Santos,
AJung Moon, Negar Rostamzadeh
- Abstract summary: We apply two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image models.
Results of our analysis demonstrate that the safety frameworks can uncover failures and hazards that pose social and ethical risks.
- Score: 8.912560990925993
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identifying potential social and ethical risks in emerging machine learning
(ML) models and their applications remains challenging. In this work, we
applied two well-established safety engineering frameworks (FMEA, STPA) to a
case study involving text-to-image models at three stages of the ML product
development pipeline: data processing, integration of a T2I model with other
models, and use. Results of our analysis demonstrate that the safety frameworks,
neither of which is explicitly designed to examine social and ethical risks,
can uncover failures and hazards that pose such risks. We
discovered a broad range of failures and hazards (i.e., functional, social, and
ethical) by analyzing interactions (i.e., between different ML models in the
product, between the ML product and user, and between development teams) and
processes (i.e., preparation of training data or workflows for using an ML
service/product). Our findings underscore the value and importance of looking
beyond the ML model itself when examining social and ethical risks, especially
when we have minimal information about the model.
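As a concrete illustration, here is a minimal sketch of the kind of FMEA-style worksheet such an analysis yields, assuming a hypothetical T2I data-processing failure. The field names and the example record are illustrative assumptions rather than the authors' artifacts; the risk priority number (RPN = severity × occurrence × detection) is standard FMEA practice.

    # Hypothetical FMEA-style worksheet for a text-to-image (T2I) pipeline.
    # Field names and the example entry are illustrative assumptions; the
    # RPN = severity * occurrence * detection scoring is standard FMEA.
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        stage: str         # "data processing", "model integration", or "use"
        component: str     # part of the pipeline that fails
        failure: str       # how it fails
        effect: str        # downstream functional, social, or ethical impact
        severity: int      # 1 (negligible) .. 10 (catastrophic)
        occurrence: int    # 1 (rare) .. 10 (frequent)
        detection: int     # 1 (easily caught) .. 10 (hard to detect)

        @property
        def rpn(self) -> int:
            """Risk priority number used to rank failure modes."""
            return self.severity * self.occurrence * self.detection

    worksheet = [
        FailureMode(
            stage="data processing",
            component="caption blocklist",
            failure="slurs evade the filter via misspellings",
            effect="model learns to reproduce demeaning depictions",
            severity=8, occurrence=5, detection=7,
        ),
    ]
    for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
        print(f"{fm.stage} / {fm.component}: RPN={fm.rpn}")

Ranking by RPN is what lets functional, social, and ethical failure modes be prioritized side by side in a single worksheet.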
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
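A minimal sketch of the evaluation loop such a framework implies; safety_rate, model_respond, and judge_is_safe are placeholder names standing in for the model under test and the automated judge, not SafeBench's actual API.

    # Sketch of an automated safety evaluation over a harmful-query dataset.
    # All names here are placeholders, not SafeBench's actual API.
    from typing import Callable

    def safety_rate(queries: list[str],
                    model_respond: Callable[[str], str],
                    judge_is_safe: Callable[[str, str], bool]) -> float:
        """Fraction of harmful queries the model handles safely."""
        safe = sum(judge_is_safe(q, model_respond(q)) for q in queries)
        return safe / len(queries)

    # Toy usage with stub model and judge:
    queries = ["how do I pick a lock?"]
    respond = lambda q: "I can't help with that."
    judge = lambda q, r: "can't help" in r
    print(safety_rate(queries, respond, judge))  # -> 1.0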
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- Safety in Graph Machine Learning: Threats and Safeguards [84.26643884225834]
Despite their societal benefits, recent research highlights significant safety concerns associated with the widespread use of Graph ML models.
Lacking safety-focused designs, these models can produce unreliable predictions, demonstrate poor generalizability, and compromise data confidentiality.
In high-stakes scenarios such as financial fraud detection, these vulnerabilities could jeopardize both individuals and society at large.
arXiv Detail & Related papers (2024-05-17T18:11:11Z)
- ML-On-Rails: Safeguarding Machine Learning Models in Software Systems - A Case Study [4.087995998278127]
We introduce ML-On-Rails, a protocol designed to safeguard machine learning models.
ML-On-Rails establishes a well-defined endpoint interface for different ML tasks, and clear communication between ML providers and ML consumers.
We evaluate the protocol through a real-world case study of the MoveReminder application.
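A minimal sketch of what a well-defined endpoint interface between ML provider and ML consumer could look like; the class and field names are illustrative assumptions, not the protocol's published API.

    # Sketch of a typed provider/consumer contract for an ML endpoint.
    # Names are illustrative assumptions, not ML-On-Rails' actual interface.
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Prediction:
        label: str
        confidence: float          # lets the consumer act on low confidence
        out_of_distribution: bool  # safeguard flag surfaced to the consumer

    class MLEndpoint(Protocol):
        def predict(self, payload: str) -> Prediction: ...

    def consume(endpoint: MLEndpoint, payload: str) -> str:
        pred = endpoint.predict(payload)
        # An explicit contract lets every consumer handle unsafe cases uniformly.
        if pred.out_of_distribution or pred.confidence < 0.5:
            return "fallback: defer to a human reviewer"
        return pred.label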
arXiv Detail & Related papers (2024-01-12T11:27:15Z)
- Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate the membership inference (MI) attack.
We exploit text similarity and the model's resistance to document modifications as potential MI signals.
We discuss several safeguards for training summarization models to protect against MI attacks and discuss the inherent trade-off between privacy and utility.
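A minimal sketch of the two MI signals described above, with a generic string similarity standing in for the paper's metrics and summarize as a placeholder for the model under attack.

    # Sketch of two membership-inference signals: (1) similarity between the
    # model's summary and the candidate reference summary, and (2) stability
    # of the summary under a small document modification. `summarize` is a
    # placeholder for the attacked model; difflib stands in for the paper's
    # similarity metrics.
    import difflib
    from typing import Callable

    def similarity(a: str, b: str) -> float:
        return difflib.SequenceMatcher(None, a, b).ratio()

    def mi_score(document: str, reference_summary: str,
                 summarize: Callable[[str], str]) -> float:
        base = summarize(document)
        sim_signal = similarity(base, reference_summary)
        # Members tend to resist perturbation: drop the first sentence
        # and check how much the summary changes.
        perturbed = ". ".join(document.split(". ")[1:])
        stability = similarity(base, summarize(perturbed))
        return (sim_signal + stability) / 2  # higher -> more likely a member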
arXiv Detail & Related papers (2023-10-20T05:44:39Z)
- Concrete Safety for ML Problems: System Safety for ML Development and Assessment [0.758305251912708]
Concerns of trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements.
Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems.
arXiv Detail & Related papers (2023-02-06T18:02:07Z)
- From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML [8.411124873373172]
Inappropriate design and deployment of machine learning (ML) systems leads to negative downstream social and ethical impact for users, society and the environment.
Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent.
arXiv Detail & Related papers (2022-10-06T00:09:06Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
New models and training techniques have been proposed to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from multiple perspectives.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Learning by Design: Structuring and Documenting the Human Choices in Machine Learning Development [6.903929927172917]
We present a method consisting of eight design questions that outline the deliberation and normative choices going into creating a machine learning model.
Our method affords several benefits, such as supporting critical assessment through methodological transparency.
We believe that our method can help ML practitioners structure and justify their choices and assumptions when developing ML models.
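A minimal sketch of how answers to such design questions could be recorded alongside a model; the example questions are paraphrased assumptions, since the paper defines its own eight.

    # Sketch of documenting normative design choices as a structured record.
    # The questions below are illustrative assumptions, not the paper's
    # actual eight design questions.
    design_record = {
        "What problem is the model intended to solve?":
            "Rank support tickets by urgency.",
        "What data were included, and what was excluded?":
            "Two years of tickets; legal-hold tickets excluded.",
        "Which normative choices shaped the target variable?":
            "'Urgency' is proxied by historical response time.",
    }
    for question, answer in design_record.items():
        print(f"Q: {question}\nA: {answer}\n")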
arXiv Detail & Related papers (2021-05-03T08:47:45Z)
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
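A minimal sketch of the modular structure such a tool implies; class names and stub logic are illustrative assumptions, not ML-Doctor's actual code.

    # Sketch of a modular risk-assessment harness covering attack families
    # like the four named above. Names and stubs are illustrative
    # assumptions, not ML-Doctor's actual code.
    from abc import ABC, abstractmethod

    class Attack(ABC):
        name: str

        @abstractmethod
        def run(self, model) -> float:
            """Return a risk score in [0, 1] for this attack."""

    class MembershipInference(Attack):
        name = "membership inference"
        def run(self, model) -> float:
            # Stub: a real module would train shadow models and compare
            # confidence distributions for members vs. non-members.
            return 0.0

    def assess(model, attacks: list[Attack]) -> dict[str, float]:
        """One score per attack; new modules plug in without changes here."""
        return {a.name: a.run(model) for a in attacks}

    print(assess(object(), [MembershipInference()]))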
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Towards a Robust and Trustworthy Machine Learning System Development [0.09236074230806578]
We present our recent survey on state-of-the-art ML trustworthiness and related technologies from a security engineering perspective.
We then push our studies forward above and beyond a survey by describing a metamodel we created that represents the body of knowledge in a standard and visualized way for ML practitioners.
We propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems.
arXiv Detail & Related papers (2021-01-08T14:43:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.