From Melting Pots to Misrepresentations: Exploring Harms in Generative AI
- URL: http://arxiv.org/abs/2403.10776v1
- Date: Sat, 16 Mar 2024 02:29:42 GMT
- Title: From Melting Pots to Misrepresentations: Exploring Harms in Generative AI
- Authors: Sanjana Gautam, Pranav Narayanan Venkit, Sourojit Ghosh
- Abstract summary: Concerns persist regarding discriminatory tendencies within advanced generative models such as Gemini and GPT.
Despite widespread calls for diversification of media representations, marginalized racial and ethnic groups continue to face persistent distortion, stereotyping, and neglect within the AI context.
- Score: 3.167924351428519
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the widespread adoption of advanced generative models such as Gemini and GPT, there has been a notable increase in the incorporation of such models into sociotechnical systems, categorized under AI-as-a-Service (AIaaS). Despite their versatility across diverse sectors, concerns persist regarding discriminatory tendencies within these models, particularly favoring selected `majority' demographics across various sociodemographic dimensions. Despite widespread calls for diversification of media representations, marginalized racial and ethnic groups continue to face persistent distortion, stereotyping, and neglect within the AIaaS context. In this work, we provide a critical summary of the state of research in the context of social harms to lead the conversation to focus on their implications. We also present open-ended research questions, guided by our discussion, to help define future research pathways.
Related papers
- Authenticity and exclusion: social media algorithms and the dynamics of belonging in epistemic communities [0.8287206589886879]
This paper examines how social media platforms and their recommendation algorithms shape the professional visibility and opportunities of researchers from minority groups.
Using agent-based simulations, we uncover three key patterns: First, these algorithms disproportionately harm the professional visibility of researchers from minority groups.
Second, within these minority groups, the algorithms result in greater visibility for users who more closely resemble the majority group, incentivizing assimilation at the cost of professional invisibility.
arXiv Detail & Related papers (2024-07-11T14:36:58Z) - The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention [61.80236015147771]
We quantify the trade-off between using diversity interventions and preserving demographic factuality in T2I models.
Experiments on DoFaiR reveal that diversity-oriented instructions increase the number of different gender and racial groups depicted, but can come at the cost of demographic factuality.
We propose Fact-Augmented Intervention (FAI) to reflect on verbalized or retrieved factual information about gender and racial compositions of generation subjects in history.
arXiv Detail & Related papers (2024-06-29T09:09:42Z) - Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
We validate our approach using both proprietary and public datasets.
Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications.
arXiv Detail & Related papers (2024-05-24T08:10:31Z) - On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z) - Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into the recent strides in hate speech (HS) moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z) - Language Agents for Detecting Implicit Stereotypes in Text-to-image Models at Scale [45.64096601242646]
We introduce a novel agent architecture tailored for stereotype detection in text-to-image models.
We build the stereotype-relevant benchmark based on multiple open-text datasets.
We find that these models often display serious stereotypes when it comes to certain prompts about personal characteristics.
arXiv Detail & Related papers (2023-10-18T08:16:29Z) - Evaluating Machine Perception of Indigeneity: An Analysis of ChatGPT's Perceptions of Indigenous Roles in Diverse Scenarios [0.0]
This work offers a unique perspective on how technology perceives and potentially amplifies societal biases related to indigeneity in social computing.
The findings offer insights into the broader implications of indigeneity in critical computing.
arXiv Detail & Related papers (2023-10-13T16:46:23Z) - Survey of Social Bias in Vision-Language Models [65.44579542312489]
This survey aims to provide researchers with a high-level overview of the similarities and differences among social bias studies in pre-trained models across NLP, CV, and VL.
The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and less biased AI models.
arXiv Detail & Related papers (2023-09-24T15:34:56Z) - Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
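The prompt-enumeration idea described in this summary can be sketched in a few lines. This is a minimal illustration only: the marker lists, the template wording, and the `enumerate_prompts` helper are hypothetical placeholders, not the prompts actually used in the Stable Bias paper.

```python
# Hedged sketch: generating one prompt per combination of identity markers,
# so variation in the resulting images can be attributed to those markers.
from itertools import product

# Illustrative marker lists -- not the paper's actual vocabulary.
GENDER_MARKERS = ["woman", "man", "non-binary person"]
ETHNICITY_MARKERS = ["Black", "East Asian", "Hispanic", "White"]
TEMPLATE = "a photo of a {ethnicity} {gender} who is a {profession}"

def enumerate_prompts(professions):
    """Build one prompt per (ethnicity, gender, profession) combination."""
    return [
        TEMPLATE.format(ethnicity=e, gender=g, profession=p)
        for e, g, p in product(ETHNICITY_MARKERS, GENDER_MARKERS, professions)
    ]

prompts = enumerate_prompts(["nurse", "engineer"])
print(len(prompts))  # 4 ethnicities x 3 genders x 2 professions = 24
```

Each prompt would then be sent to a text-to-image system, and the generated images compared across marker combinations to characterize representational skew.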
arXiv Detail & Related papers (2023-03-20T19:32:49Z) - A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics [24.86176236641865]
We present the first survey in Explainable AI that focuses on the methods and metrics for interpreting deep visual models.
Covering landmark contributions through to the state of the art, we not only provide a taxonomic organization of the existing techniques, but also survey a range of evaluation metrics.
arXiv Detail & Related papers (2023-01-31T06:49:42Z) - Didn't see that coming: a survey on non-verbal social human behavior forecasting [47.99589136455976]
Non-verbal social human behavior forecasting has increasingly attracted the interest of the research community in recent years.
Its direct applications to human-robot interaction and socially-aware human motion generation make it a very attractive field.
We define the behavior forecasting problem for multiple interactive agents in a generic way that aims at unifying the fields of social signals prediction and human motion forecasting.
arXiv Detail & Related papers (2022-03-04T18:25:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.