Typology of Risks of Generative Text-to-Image Models
- URL: http://arxiv.org/abs/2307.05543v1
- Date: Sat, 8 Jul 2023 20:33:30 GMT
- Title: Typology of Risks of Generative Text-to-Image Models
- Authors: Charlotte Bird and Eddie L. Ungless and Atoosa Kasirzadeh
- Abstract summary: This paper investigates the direct risks and harms associated with modern text-to-image generative models, such as DALL-E and Midjourney.
Our review reveals significant knowledge gaps in the understanding and treatment of these risks, even though some of these risks are already being addressed.
We identify 22 distinct risk types, spanning issues from data bias to malicious use.
- Score: 1.933681537640272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the direct risks and harms associated with modern
text-to-image generative models, such as DALL-E and Midjourney, through a
comprehensive literature review. While these models offer unprecedented
capabilities for generating images, their development and use introduce new
types of risk that require careful consideration. Our review reveals
significant knowledge gaps in the understanding and treatment of these
risks, even though some are already being addressed. We offer a taxonomy of
risks across
six key stakeholder groups, inclusive of unexplored issues, and suggest future
research directions. We identify 22 distinct risk types, spanning issues from
data bias to malicious use. The investigation presented here is intended to
enhance the ongoing discourse on responsible model development and deployment.
By highlighting previously overlooked risks and gaps, it aims to shape
subsequent research and governance initiatives, guiding them toward the
responsible, secure, and ethically conscious evolution of text-to-image models.
Related papers
- Automatic Extraction of Disease Risk Factors from Medical Publications [1.321009936753118]
We present a novel approach to automating the identification of risk factors for diseases from medical literature.
We first identify relevant articles, then classify them based on the presence of risk factor discussions, and finally extract specific risk factor information for a disease.
Our contributions include the development of a comprehensive pipeline for the automated extraction of risk factors and the compilation of several datasets.
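As a toy illustration of such a three-stage pipeline (our sketch only; the relevance filter, classifier, and extractor below are hypothetical placeholders, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class RiskFactor:
    disease: str
    factor: str

def is_relevant(article: str, disease: str) -> bool:
    """Stage 1: keep only articles that mention the target disease."""
    return disease.lower() in article.lower()

def discusses_risk_factors(article: str) -> bool:
    """Stage 2: crude stand-in for a trained classifier that detects
    risk-factor discussions."""
    return "risk factor" in article.lower()

def extract_risk_factors(article: str, disease: str) -> list[RiskFactor]:
    """Stage 3: placeholder extractor; a real system would use an NER or
    relation-extraction model here."""
    return [RiskFactor(disease, sent.strip())
            for sent in article.split(".") if "risk factor" in sent.lower()]

def pipeline(articles: list[str], disease: str) -> list[RiskFactor]:
    """Identify -> classify -> extract, mirroring the stages described above."""
    relevant = [a for a in articles if is_relevant(a, disease)]
    flagged = [a for a in relevant if discusses_risk_factors(a)]
    return [rf for a in flagged for rf in extract_risk_factors(a, disease)]

# Toy usage with two made-up article snippets:
docs = ["Smoking is a known risk factor for lung cancer.", "Unrelated article."]
print(pipeline(docs, disease="lung cancer"))
```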
arXiv Detail & Related papers (2024-07-10T05:17:55Z)
- Towards Probing Speech-Specific Risks in Large Multimodal Models: A Taxonomy, Benchmark, and Insights [50.89022445197919]
We propose a speech-specific risk taxonomy covering 8 risk categories under hostility (malicious sarcasm and threats), malicious imitation (age, gender, ethnicity), and stereotypical biases (age, gender, ethnicity).
Based on the taxonomy, we create a small-scale dataset for evaluating current LMMs' capability in detecting these categories of risk.
arXiv Detail & Related papers (2024-06-25T10:08:45Z)
- Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion Models [58.74606272936636]
Text-to-image (T2I) diffusion models have shown exceptional capabilities in generating images that closely correspond to textual prompts.
These models could, however, be exploited for malicious purposes, such as generating images with violence or nudity, or creating unauthorized portraits of public figures in inappropriate contexts.
Concept removal methods have been proposed to modify diffusion models to prevent the generation of malicious and unwanted concepts.
arXiv Detail & Related papers (2024-06-21T03:58:44Z)
- On the Societal Impact of Open Foundation Models [93.67389739906561]
We focus on open foundation models, defined here as those with broadly available model weights.
We identify five distinctive properties of open foundation models that lead to both their benefits and risks.
arXiv Detail & Related papers (2024-02-27T16:49:53Z)
- C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models [57.10361282229501]
We propose C-RAG, the first framework to certify generation risks for RAG models.
Specifically, we provide conformal risk analysis for RAG models and certify an upper confidence bound of generation risks.
We prove that RAG achieves a lower conformal generation risk than a single LLM when the quality of the retrieval model and the transformer is non-trivial.
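For intuition, a certificate of this kind can take the shape of a distribution-free upper confidence bound. A minimal generic sketch (our illustration, not necessarily C-RAG's exact construction), assuming n i.i.d. calibration examples with empirical generation risk bounded in [0, 1]:

```latex
% Generic Hoeffding-style high-probability upper bound on the true
% risk R(\lambda), given the empirical risk \hat{R}_n(\lambda) on
% n i.i.d. calibration examples with risks in [0, 1].
\Pr\!\left( R(\lambda) \;\le\; \hat{R}_n(\lambda)
  + \sqrt{\frac{\ln(1/\delta)}{2n}} \right) \;\ge\; 1 - \delta
```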
arXiv Detail & Related papers (2024-02-05T16:46:16Z)
- The Reasoning Under Uncertainty Trap: A Structural AI Risk [0.0]
The report provides an exposition of what makes reasoning under uncertainty (RUU) so challenging for both humans and machines.
We detail how this misuse risk connects to a wider network of underlying structural risks.
arXiv Detail & Related papers (2024-01-29T17:16:57Z)
- Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey [50.58063811745676]
This work provides a survey of practical methods for addressing potential threats and societal harms from language generation models.
We draw on several prior works on language model risks to present a structured overview of strategies for detecting and ameliorating different kinds of risks and harms from language generators.
arXiv Detail & Related papers (2022-10-14T10:43:39Z)
- Membership Inference Attacks Against Text-to-image Generation Models [23.39695974954703]
This paper performs the first privacy analysis of text-to-image generation models through the lens of membership inference.
We propose three key intuitions about membership information and design four attack methodologies accordingly.
All of the proposed attacks can achieve significant performance, in some cases even close to an accuracy of 1, and thus the corresponding risk is much more severe than that shown by existing membership inference attacks.
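A minimal sketch of the basic idea behind membership inference (a generic loss-thresholding attack shown for illustration, not one of the paper's four methods; the losses and threshold below are toy values):

```python
import numpy as np

def infer_membership(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Classic loss-thresholding attack: samples on which the target model
    has unusually low loss are predicted to be training members."""
    return losses < threshold

# Toy usage: per-sample losses obtained by querying the target model.
candidate_losses = np.array([0.12, 0.95, 0.08, 1.40])
print(infer_membership(candidate_losses, threshold=0.5))  # [ True False  True False]
```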
arXiv Detail & Related papers (2022-10-03T14:31:39Z)
- Holistic Adversarial Robustness of Deep Learning Models [91.34155889052786]
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.
This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models.
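The worst-case view is commonly formalized as a min-max objective: train the model parameters against the strongest perturbation allowed within an lp-ball of radius epsilon (the standard robust-optimization formulation, included here for reference):

```latex
% Robust training objective: minimize the loss under the
% worst-case bounded perturbation of each input.
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \left[ \max_{\|\delta\|_p \le \epsilon}
  \mathcal{L}\big(f_\theta(x + \delta),\, y\big) \right]
```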
arXiv Detail & Related papers (2022-02-15T05:30:27Z)
- Two steps to risk sensitivity [4.974890682815778]
Conditional value-at-risk (CVaR) is a risk measure for modeling human and animal planning.
We adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers.
We then consider a further critical property of risk sensitivity, namely time consistency, showing alternatives to this form of CVaR.
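For reference, one common convention defines CVaR for a loss variable L at level alpha as the expected loss in the worst tail (tail and sign conventions vary across the literature):

```latex
% Value-at-risk: the alpha-quantile of the loss distribution.
\mathrm{VaR}_{\alpha}(L) = \inf\{\, \ell \in \mathbb{R} : \Pr(L \le \ell) \ge \alpha \,\}
% Conditional value-at-risk: expected loss beyond that quantile.
\mathrm{CVaR}_{\alpha}(L) = \mathbb{E}\!\left[\, L \mid L \ge \mathrm{VaR}_{\alpha}(L) \,\right]
```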
arXiv Detail & Related papers (2021-11-12T16:27:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.