From Melting Pots to Misrepresentations: Exploring Harms in Generative AI
- URL: http://arxiv.org/abs/2403.10776v1
- Date: Sat, 16 Mar 2024 02:29:42 GMT
- Title: From Melting Pots to Misrepresentations: Exploring Harms in Generative AI
- Authors: Sanjana Gautam, Pranav Narayanan Venkit, Sourojit Ghosh
- Abstract summary: Concerns persist regarding discriminatory tendencies within advanced generative models such as Gemini and GPT.
Despite widespread calls for diversification of media representations, marginalized racial and ethnic groups continue to face persistent distortion, stereotyping, and neglect within the AI context.
- Score: 3.167924351428519
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the widespread adoption of advanced generative models such as Gemini and GPT, there has been a notable increase in the incorporation of such models into sociotechnical systems, categorized under AI-as-a-Service (AIaaS). Despite their versatility across diverse sectors, concerns persist regarding discriminatory tendencies within these models, particularly favoring selected 'majority' demographics across various sociodemographic dimensions. Despite widespread calls for diversification of media representations, marginalized racial and ethnic groups continue to face persistent distortion, stereotyping, and neglect within the AIaaS context. In this work, we provide a critical summary of the state of research on social harms in order to focus the conversation on their implications. We also present open-ended research questions, guided by our discussion, to help define future research pathways.
Related papers
- Emergence of human-like polarization among large language model agents [61.622596148368906]
We simulate a networked system of thousands of large language model agents and find that their social interactions result in human-like polarization.
Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate it.
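The digest above does not reproduce the paper's protocol; the Python sketch below only illustrates the general shape of such a networked agent simulation, with a simple bounded-confidence update rule standing in for the actual LLM call. All names, parameters, and the update rule itself are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy networked opinion-dynamics simulation. The paper drives each agent
# with an LLM; here a bounded-confidence rule stands in for that call.

N_AGENTS = 200          # the paper simulates thousands; kept small here
N_INTERACTIONS = 10_000
CONFIDENCE = 0.3        # agents assimilate only to nearby opinions

def llm_update(own: float, other: float) -> float:
    """Stand-in for an LLM agent's response: move toward like-minded
    neighbors, drift away from distant ones."""
    if abs(own - other) < CONFIDENCE:
        return own + 0.5 * (other - own)                    # assimilate
    return max(-1.0, min(1.0, own + 0.05 * (own - other)))  # repel

def simulate(seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    opinions = [rng.uniform(-1.0, 1.0) for _ in range(N_AGENTS)]
    # fixed random interaction network: each agent has five neighbors
    neighbors = [rng.sample(range(N_AGENTS), 5) for _ in range(N_AGENTS)]
    for _ in range(N_INTERACTIONS):
        i = rng.randrange(N_AGENTS)
        opinions[i] = llm_update(opinions[i], opinions[rng.choice(neighbors[i])])
    return opinions

if __name__ == "__main__":
    final = simulate()
    extreme = sum(abs(o) > 0.8 for o in final) / N_AGENTS
    print(f"share of agents at the extremes: {extreme:.0%}")
```

In the paper itself, each update would instead be produced by prompting an LLM agent with its neighbors' messages, which is what allows human-like (rather than rule-prescribed) polarization to emerge.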
arXiv Detail & Related papers (2025-01-09T11:45:05Z)
- A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations [15.381034360289899]
Societal stereotypes are at the center of a myriad of responsible AI interventions.
We propose a unified framework to operationalize stereotypes in generative AI evaluations.
arXiv Detail & Related papers (2025-01-03T19:39:48Z)
- Authenticity and exclusion: social media algorithms and the dynamics of belonging in epistemic communities [0.8287206589886879]
This paper examines how social media platforms and their recommendation algorithms shape the professional visibility and opportunities of researchers from minority groups.
Using agent-based simulations, we uncover key patterns. First, these algorithms disproportionately harm the professional visibility of researchers from minority groups.
Second, within these minority groups, the algorithms grant greater visibility to users who more closely resemble the majority group, making professional invisibility the price of authenticity and thereby incentivizing assimilation.
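As a toy illustration of the visibility dynamic described above (not the authors' simulation), the sketch below ranks authors by similarity to majority-group content and measures how rarely minority authors surface; all distributions and parameters are invented for the example.

```python
import random

# Toy recommender: rank authors by similarity to majority-group content
# and see who makes the feed. All numbers are invented for illustration.

random.seed(1)
MAJORITY_TASTE = 0.0  # majority-group content sits near 0.0 on a 1-D axis

authors = [("majority", random.gauss(0.0, 0.2)) for _ in range(80)]
authors += [("minority", random.gauss(1.0, 0.2)) for _ in range(20)]

def feed(k: int = 10) -> list[tuple[str, float]]:
    """Similarity-based ranking: closest to the majority taste wins."""
    return sorted(authors, key=lambda a: abs(a[1] - MAJORITY_TASTE))[:k]

top = feed()
minority_share = sum(1 for group, _ in top if group == "minority") / len(top)
print(f"minority share of the feed: {minority_share:.0%} (population share: 20%)")
# A minority author whose content sits near 0.0 would rank far higher,
# which is the assimilation incentive described above.
```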
arXiv Detail & Related papers (2024-07-11T14:36:58Z)
- The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention [61.80236015147771]
We quantify the trade-off between using diversity interventions and preserving demographic factuality in T2I models.
Experiments on the DoFaiR benchmark reveal that diversity-oriented instructions increase the number of gender and racial groups depicted, at the cost of demographic factuality.
We propose Fact-Augmented Intervention (FAI), which has the model reflect on verbalized or retrieved factual information about the historical gender and racial composition of its generation subjects.
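A minimal sketch of what a fact-augmented prompt intervention could look like, assuming a hypothetical fact lookup table; the prompt wording and the `HISTORICAL_FACTS` mapping are illustrative, not the paper's FAI implementation.

```python
# Hypothetical fact store; in FAI the facts would be verbalized by the
# model or retrieved from an external source, not hard-coded like this.
HISTORICAL_FACTS = {
    "signers of the US Declaration of Independence": (
        "All 56 signers were white men."
    ),
}

def fact_augmented_prompt(subject: str, base_prompt: str) -> str:
    """Prepend factual demographic context so a diversity-intervened
    T2I model can weigh factuality against its diversity instructions."""
    fact = HISTORICAL_FACTS.get(subject)
    if fact is None:
        return base_prompt          # nothing retrieved: prompt unchanged
    return (f"Historical context: {fact} "
            f"Depict the scene faithfully to this record. {base_prompt}")

print(fact_augmented_prompt(
    "signers of the US Declaration of Independence",
    "A painting of the signing of the Declaration of Independence.",
))
```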
arXiv Detail & Related papers (2024-06-29T09:09:42Z)
- Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
The 'Leave No One Behind' initiative urges us to address multiple and intersecting forms of inequality in accessing services, resources, and opportunities.
An increasing number of AI tools are applied to decision-making processes in various sectors such as health, energy, and housing.
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
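To make the latent class analysis (LCA) idea concrete, here is a minimal sketch: a Bernoulli mixture fit by EM over binary service-access indicators, followed by a comparison of latent-class shares across demographic groups. The data and all variable names are synthetic illustrations, not the paper's method or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lca(X: np.ndarray, n_classes: int, n_iter: int = 200):
    """Fit a Bernoulli mixture by EM. X: (n_people, n_indicators) binary
    matrix. Returns class priors, per-class item rates, and posteriors."""
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class priors
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # item rates
    for _ in range(n_iter):
        # E-step: posterior probability of each latent class per person
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors and item rates
        nk = post.sum(axis=0)
        pi = nk / n
        theta = np.clip((post.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post

# Synthetic data: two latent access profiles over three sectors
# (e.g., health, energy, housing), plus a demographic group label.
X = np.vstack([
    rng.binomial(1, [0.9, 0.8, 0.85], size=(300, 3)),  # well-served profile
    rng.binomial(1, [0.3, 0.4, 0.25], size=(100, 3)),  # under-served profile
])
groups = np.array([0] * 300 + [1] * 100)

pi, theta, post = fit_lca(X, n_classes=2)
hard = post.argmax(axis=1)
# Discrepancy: how unevenly demographic groups fall into latent classes.
for g in (0, 1):
    shares = np.bincount(hard[groups == g], minlength=2) / (groups == g).sum()
    print(f"group {g}: latent-class shares = {shares.round(2)}")
```

Note that latent-class indices are arbitrary (label switching), so in practice classes would be matched or characterized by their fitted item rates before comparing groups.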
arXiv Detail & Related papers (2024-05-24T08:10:31Z)
- Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into recent strides in hate speech (HS) moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z)
- Language Agents for Detecting Implicit Stereotypes in Text-to-image Models at Scale [45.64096601242646]
We introduce a novel agent architecture tailored for stereotype detection in text-to-image models.
We build the stereotype-relevant benchmark based on multiple open-text datasets.
We find that these models often display serious stereotypes in response to certain prompts about personal characteristics.
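The paper's agent architecture is not detailed in this digest; the stubs below merely sketch one plausible shape for such an audit loop (a planner proposes probe prompts, the T2I model under test generates, a classifier scores the images). Every component and name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    prompt: str
    stereotype_rate: float  # fraction of images matching the attribute

def propose_prompts(topic: str) -> list[str]:
    """Stub for the planning agent: expand a topic into probe prompts."""
    return [f"a photo of a person who is {topic}"]

def generate_images(prompt: str, n: int = 8) -> list[bytes]:
    """Stub for the text-to-image model under audit."""
    raise NotImplementedError("connect a real T2I system here")

def classify(images: list[bytes], attribute: str) -> float:
    """Stub for a vision classifier checking a stereotyped attribute."""
    raise NotImplementedError("connect a real attribute classifier here")

def audit(topic: str, attribute: str) -> list[Finding]:
    """Agent loop: propose prompts, generate, score, report."""
    return [Finding(p, classify(generate_images(p), attribute))
            for p in propose_prompts(topic)]

if __name__ == "__main__":
    print(propose_prompts("ambitious"))  # generation stubs are not called
```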
arXiv Detail & Related papers (2023-10-18T08:16:29Z)
- Survey of Social Bias in Vision-Language Models [65.44579542312489]
This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL.
The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and unbiased AI models.
arXiv Detail & Related papers (2023-09-24T15:34:56Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
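A minimal sketch of the prompt-enumeration idea: hold the prompt template fixed, vary identity markers (including an unmarked default), and collect the grid of prompts whose generations would then be compared. The marker lists and the `generate` stub are assumptions for illustration, not the paper's exact setup.

```python
from itertools import product

# Enumerate identity markers in otherwise-identical prompts; the resulting
# image sets would then be compared to characterize variation.

GENDER_MARKERS = ["woman", "man", "non-binary person"]
ETHNICITY_MARKERS = ["", "Black ", "East Asian ", "Hispanic ", "white "]
PROFESSIONS = ["doctor", "janitor", "CEO"]

def build_prompts() -> list[str]:
    """Full grid of (ethnicity marker x gender marker x profession)."""
    return [
        f"A portrait of a {eth}{gender} working as a {job}"
        for eth, gender, job in product(
            ETHNICITY_MARKERS, GENDER_MARKERS, PROFESSIONS
        )
    ]

def generate(prompt: str) -> bytes:
    """Placeholder for a call to an actual T2I system."""
    raise NotImplementedError("wire this to a real text-to-image model")

if __name__ == "__main__":
    prompts = build_prompts()
    print(f"{len(prompts)} prompts, e.g. {prompts[0]!r}")
    # In the full analysis, images from each prompt cell would be embedded
    # and clustered to characterize variation across markers.
```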
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
- A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics [24.86176236641865]
We present the first survey in Explainable AI that focuses on the methods and metrics for interpreting deep visual models.
Covering landmark contributions alongside the state of the art, we not only provide a taxonomic organization of the existing techniques but also excavate a range of evaluation metrics.
arXiv Detail & Related papers (2023-01-31T06:49:42Z)
- Didn't see that coming: a survey on non-verbal social human behavior forecasting [47.99589136455976]
Non-verbal social human behavior forecasting has increasingly attracted the interest of the research community in recent years.
Its direct applications to human-robot interaction and socially-aware human motion generation make it a very attractive field.
We define the behavior forecasting problem for multiple interactive agents in a generic way that aims at unifying the fields of social signals prediction and human motion forecasting.
arXiv Detail & Related papers (2022-03-04T18:25:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.