Bias of AI-Generated Content: An Examination of News Produced by Large Language Models
- URL: http://arxiv.org/abs/2309.09825v3
- Date: Wed, 3 Apr 2024 19:47:54 GMT
- Title: Bias of AI-Generated Content: An Examination of News Produced by Large Language Models
- Authors: Xiao Fang, Shangkun Che, Minjia Mao, Hongzhe Zhang, Ming Zhao, Xiaohang Zhao
- Abstract summary: Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC).
Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA.
- Score: 3.386586126505656
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC). To harness this transformation, we need to understand the limitations of LLMs. Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA. We collect news articles from The New York Times and Reuters, both known for their dedication to providing unbiased news. We then apply each examined LLM to generate news content using the headlines of these articles as prompts, and evaluate the gender and racial biases of the resulting AIGC by comparing it against the original news articles. We further analyze the gender bias of each LLM under biased prompts by adding gender-biased messages to prompts constructed from these news headlines. Our study reveals that the AIGC produced by each examined LLM demonstrates substantial gender and racial biases. Moreover, the AIGC generated by each LLM exhibits notable discrimination against women and Black individuals. Among the LLMs, the AIGC generated by ChatGPT demonstrates the lowest level of bias, and ChatGPT is the sole model capable of declining content generation when provided with biased prompts.
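The comparison described in the abstract — generating text from a headline prompt and contrasting its gender balance against the source article — can be illustrated with a minimal sketch. This is not the paper's actual metric; the word lists, `female_share` helper, and sample texts below are simplifying assumptions for illustration only.

```python
# Minimal sketch: compare the share of female-associated mentions in
# generated text vs. the original article. Word lists are assumptions.
import re
from collections import Counter

FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "female"}
MALE_WORDS = {"he", "him", "his", "man", "men", "male"}

def gendered_counts(text: str) -> Counter:
    """Count female- and male-associated words in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        if tok in FEMALE_WORDS:
            counts["female"] += 1
        elif tok in MALE_WORDS:
            counts["male"] += 1
    return counts

def female_share(text: str) -> float:
    """Fraction of gendered mentions that are female (0.5 = balanced)."""
    c = gendered_counts(text)
    total = c["female"] + c["male"]
    return c["female"] / total if total else 0.5

# Hypothetical original article vs. LLM-generated continuation.
original = "She said the senator met her colleagues."
generated = "He told the senator that his colleagues agreed with him."

# A negative gap means the generated text under-represents women
# relative to the original -- one crude signal of the bias the paper reports.
gap = female_share(generated) - female_share(original)
```

A real evaluation would use far richer signals (named entities, coreference, racial indicators), but the structure — score generated output, score the unbiased reference, compare — mirrors the study design described above.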
Related papers
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - AI AI Bias: Large Language Models Favor Their Own Generated Content [0.1979158763744267]
We test whether large language models (LLMs) are biased towards text generated by LLMs over text authored by humans.
Our results show a consistent tendency for LLM-based AIs to prefer LLM-generated content.
This suggests the possibility of AI systems implicitly discriminating against humans, giving AI agents an unfair advantage.
arXiv Detail & Related papers (2024-07-09T13:15:14Z) - Seeing Through AI's Lens: Enhancing Human Skepticism Towards LLM-Generated Fake News [0.38233569758620056]
This paper aims to elucidate simple markers that help individuals distinguish between articles penned by humans and those created by LLMs.
We then devise a metric named Entropy-Shift Authorship Signature (ESAS), based on information theory and entropy principles.
The proposed ESAS ranks terms or entities (e.g., POS tags) within news articles by their relevance in discerning article authorship.
arXiv Detail & Related papers (2024-06-20T06:02:04Z) - White Men Lead, Black Women Help? Benchmarking Language Agency Social Biases in LLMs [58.27353205269664]
Social biases can manifest in language agency.
We introduce the novel Language Agency Bias Evaluation benchmark.
We unveil language agency social biases in 3 recent Large Language Model (LLM)-generated content.
arXiv Detail & Related papers (2024-04-16T12:27:54Z) - Disclosure and Mitigation of Gender Bias in LLMs [64.79319733514266]
Large Language Models (LLMs) can generate biased responses.
We propose an indirect probing framework based on conditional generation.
We explore three distinct strategies to disclose explicit and implicit gender bias in LLMs.
arXiv Detail & Related papers (2024-02-17T04:48:55Z) - Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z) - Disinformation Capabilities of Large Language Models [0.564232659769944]
This paper presents a study of the disinformation capabilities of the current generation of large language models (LLMs).
We evaluated the capabilities of 10 LLMs using 20 disinformation narratives.
We conclude that LLMs are able to generate convincing news articles that agree with dangerous disinformation narratives.
arXiv Detail & Related papers (2023-11-15T10:25:30Z) - Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation [64.79319733514266]
Large Language Models (LLMs) can generate biased and toxic responses.
We propose a conditional text generation mechanism without the need for predefined gender phrases and stereotypes.
arXiv Detail & Related papers (2023-11-01T05:31:46Z) - "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters [97.11173801187816]
Large Language Models (LLMs) have recently emerged as an effective tool to assist individuals in writing various types of content.
This paper critically examines gender biases in LLM-generated reference letters.
arXiv Detail & Related papers (2023-10-13T16:12:57Z) - Fake News Detectors are Biased against Texts Generated by Large Language Models [39.36284616311687]
The spread of fake news has emerged as a critical challenge, undermining trust and posing threats to society.
We present a novel paradigm to evaluate fake news detectors in scenarios involving both human-written and LLM-generated misinformation.
arXiv Detail & Related papers (2023-09-15T18:04:40Z) - Queer People are People First: Deconstructing Sexual Identity Stereotypes in Large Language Models [3.974379576408554]
Large Language Models (LLMs) are trained primarily on minimally processed web text.
LLMs can inadvertently perpetuate stereotypes towards marginalized groups, like the LGBTQIA+ community.
arXiv Detail & Related papers (2023-06-30T19:39:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.