Use large language models to promote equity
- URL: http://arxiv.org/abs/2312.14804v1
- Date: Fri, 22 Dec 2023 16:26:20 GMT
- Title: Use large language models to promote equity
- Authors: Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica
Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky,
Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini
Suresh, Keyon Vafa
- Abstract summary: Large language models (LLMs) have driven an explosion of interest about their societal impacts.
Much of the discourse around how they will impact social equity has been cautionary or negative.
This is a vital discussion: the ways in which AI generally, and LLMs specifically, can entrench biases have been well-documented.
But equally vital, and much less discussed, is the more opportunity-focused counterpoint: "what promising applications do LLMs enable that could promote equity?"
- Score: 40.183853467716766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in large language models (LLMs) have driven an explosion of interest
about their societal impacts. Much of the discourse around how they will impact
social equity has been cautionary or negative, focusing on questions like "how
might LLMs be biased and how would we mitigate those biases?" This is a vital
discussion: the ways in which AI generally, and LLMs specifically, can entrench
biases have been well-documented. But equally vital, and much less discussed,
is the more opportunity-focused counterpoint: "what promising applications do
LLMs enable that could promote equity?" If LLMs are to enable a more equitable
world, it is not enough just to play defense against their biases and failure
modes. We must also go on offense, applying them positively to equity-enhancing
use cases to increase opportunities for underserved groups and reduce societal
discrimination. There are many choices which determine the impact of AI, and a
fundamental choice very early in the pipeline is the problems we choose to
apply it to. If we focus only later in the pipeline -- making LLMs marginally
more fair as they facilitate use cases which intrinsically entrench power -- we
will miss an important opportunity to guide them to equitable impacts. Here, we
highlight the emerging potential of LLMs to promote equity by presenting four
newly possible, promising research directions, while keeping risks and
cautionary points in clear view.
Related papers
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition [74.04775677110179]
In-context Learning (ICL) has emerged as a powerful paradigm for performing natural language tasks with Large Language Models (LLMs).
We show that LLMs have strong yet inconsistent priors in emotion recognition that ossify their predictions.
Our results suggest that caution is needed when using ICL with larger LLMs for affect-centered tasks outside their pre-training domain.
arXiv Detail & Related papers (2024-03-25T19:07:32Z)
- How Susceptible are Large Language Models to Ideological Manipulation? [14.598848573524549]
Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information.
This raises concerns about the societal impact that could arise if the ideologies within these models can be easily manipulated.
arXiv Detail & Related papers (2024-02-18T22:36:19Z)
- Exploring Value Biases: How LLMs Deviate Towards the Ideal [57.99044181599786]
Large-Language-Models (LLMs) are deployed in a wide range of applications, and their response has an increasing social impact.
We show that value bias is strong in LLMs across different categories, similar to the results found in human studies.
arXiv Detail & Related papers (2024-02-16T18:28:43Z)
- A Group Fairness Lens for Large Language Models [34.0579082699443]
Large language models can perpetuate biases and unfairness when deployed in social media contexts.
We propose evaluating LLM biases from a group fairness lens using a novel hierarchical schema characterizing diverse social groups.
We pioneer a novel chain-of-thought method GF-Think to mitigate biases of LLMs from a group fairness perspective.
arXiv Detail & Related papers (2023-12-24T13:25:15Z)
- The ART of LLM Refinement: Ask, Refine, and Trust [85.75059530612882]
We propose a reasoning with refinement objective called ART: Ask, Refine, and Trust.
It asks necessary questions to decide when an LLM should refine its output.
It achieves a performance gain of +5 points over self-refinement baselines.
arXiv Detail & Related papers (2023-11-14T07:26:32Z)
- A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction [60.70089334782383]
Large language models (LLMs) have demonstrated great potential for domain-specific applications.
Recent disputes over GPT-4's law evaluation raise questions concerning their performance in real-world legal tasks.
We design practical baseline solutions based on LLMs and test on the task of legal judgment prediction.
arXiv Detail & Related papers (2023-10-18T07:38:04Z)
- A Survey on Fairness in Large Language Models [28.05516809190299]
Large Language Models (LLMs) have shown powerful performance and development prospects.
LLMs can capture social biases from unprocessed training data and propagate the biases to downstream tasks.
Unfair LLM systems have undesirable social impacts and potential harms.
arXiv Detail & Related papers (2023-08-20T03:30:22Z)
- Quantifying the Impact of Large Language Models on Collective Opinion Dynamics [7.0012506428382375]
We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also find that by introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output.
arXiv Detail & Related papers (2023-08-07T05:45:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.