Use large language models to promote equity
- URL: http://arxiv.org/abs/2312.14804v1
- Date: Fri, 22 Dec 2023 16:26:20 GMT
- Title: Use large language models to promote equity
- Authors: Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica
Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky,
Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini
Suresh, Keyon Vafa
- Abstract summary: Large language models (LLMs) have driven an explosion of interest in their societal impacts.
Much of the discourse around how they will impact social equity has been cautionary or negative.
This is a vital discussion: the ways in which AI generally, and LLMs specifically, can entrench biases have been well-documented.
But equally vital, and much less discussed, is the more opportunity-focused counterpoint: "what promising applications do LLMs enable that could promote equity?"
- Score: 40.183853467716766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in large language models (LLMs) have driven an explosion of interest
in their societal impacts. Much of the discourse around how they will impact
social equity has been cautionary or negative, focusing on questions like "how
might LLMs be biased and how would we mitigate those biases?" This is a vital
discussion: the ways in which AI generally, and LLMs specifically, can entrench
biases have been well-documented. But equally vital, and much less discussed,
is the more opportunity-focused counterpoint: "what promising applications do
LLMs enable that could promote equity?" If LLMs are to enable a more equitable
world, it is not enough just to play defense against their biases and failure
modes. We must also go on offense, applying them positively to equity-enhancing
use cases to increase opportunities for underserved groups and reduce societal
discrimination. There are many choices which determine the impact of AI, and a
fundamental choice very early in the pipeline is the problems we choose to
apply it to. If we focus only later in the pipeline -- making LLMs marginally
more fair as they facilitate use cases which intrinsically entrench power -- we
will miss an important opportunity to guide them to equitable impacts. Here, we
highlight the emerging potential of LLMs to promote equity by presenting four
newly possible, promising research directions, while keeping risks and
cautionary points in clear view.
Related papers
- Bayesian Teaching Enables Probabilistic Reasoning in Large Language Models [50.16340812031201]
We show that large language models (LLMs) do not update their beliefs as expected from the Bayesian framework.
We teach the LLMs to reason in a Bayesian manner by training them to mimic the predictions of an optimal Bayesian model.
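The "optimal Bayesian model" referenced above can be illustrated with a minimal conjugate update; this is an illustrative sketch of standard Beta-Binomial updating, not the paper's actual training target:

```python
# Conjugate Bayesian update for a Bernoulli rate with a Beta prior:
# posterior Beta(alpha + successes, beta + failures).
def beta_binomial_update(alpha, beta, successes, failures):
    return alpha + successes, beta + failures

# Start from a uniform prior Beta(1, 1); observe 7 heads and 3 tails.
a, b = beta_binomial_update(1, 1, 7, 3)
posterior_mean = a / (a + b)  # (1 + 7) / (1 + 7 + 1 + 3) = 8/12
```

An LLM that "reasons in a Bayesian manner" on such a task would report beliefs matching this posterior mean after seeing the same evidence.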
arXiv Detail & Related papers (2025-03-21T20:13:04Z)
- Unequal Opportunities: Examining the Bias in Geographical Recommendations by Large Language Models [11.585115320816257]
This study examines the biases present in Large Language Models (LLMs) recommendations of U.S. cities and towns.
We focus on the consistency of LLMs' responses and their tendency to over-represent or under-represent specific locations.
Our findings point to consistent demographic biases in these recommendations, which could perpetuate a "rich-get-richer" effect that widens existing economic disparities.
arXiv Detail & Related papers (2025-03-16T18:59:00Z)
- Distributive Fairness in Large Language Models: Evaluating Alignment with Human Values [13.798198972161657]
A number of societal problems involve the distribution of resources, where fairness, along with economic efficiency, plays a critical role in the desirability of outcomes.
This paper examines whether large language models (LLMs) adhere to fundamental fairness concepts and investigates their alignment with human preferences.
arXiv Detail & Related papers (2025-02-01T04:24:47Z)
- Should You Use Your Large Language Model to Explore or Exploit? [55.562545113247666]
We evaluate the ability of large language models to help a decision-making agent facing an exploration-exploitation tradeoff.
We find that while current LLMs often struggle to exploit, in-context mitigations can substantially improve performance on small-scale tasks.
arXiv Detail & Related papers (2025-01-31T23:42:53Z)
- Observing Micromotives and Macrobehavior of Large Language Models [14.649811719084505]
We follow Schelling's model of segregation to observe the relationship between the micromotives and macrobehavior of large language models.
Our results indicate that, regardless of the level of bias in LLMs, a highly segregated society will emerge as more people follow LLMs' suggestions.
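Schelling's classic segregation model, which this paper adapts, shows how mild individual preferences produce strong aggregate segregation. A minimal sketch of the original model (illustrative only; the paper's LLM-driven variant is not reproduced here):

```python
import random

def schelling_step(grid, size, threshold):
    """One round of Schelling's model: each agent with fewer than
    `threshold` fraction of like neighbors moves to a random empty cell.
    Returns the number of agents that moved."""
    empties = [p for p, v in grid.items() if v is None]
    moved = 0
    for pos, agent in list(grid.items()):
        if agent is None:
            continue
        x, y = pos
        # Moore neighborhood on a torus.
        neighbors = [grid.get(((x + dx) % size, (y + dy) % size))
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)]
        occupied = [n for n in neighbors if n is not None]
        if occupied and sum(n == agent for n in occupied) / len(occupied) < threshold:
            new = random.choice(empties)
            empties.remove(new)
            empties.append(pos)
            grid[new], grid[pos] = agent, None
            moved += 1
    return moved

# Toy run: 10x10 torus, two groups of 40, 20 empty cells.
random.seed(0)
size = 10
cells = ["A"] * 40 + ["B"] * 40 + [None] * 20
random.shuffle(cells)
grid = {(i // size, i % size): cells[i] for i in range(size * size)}
for _ in range(30):
    if schelling_step(grid, size, threshold=0.5) == 0:
        break
```

In the paper's variant, the relocation decision is replaced by an LLM's suggestion, so the macro-level segregation reflects the model's micromotives.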
arXiv Detail & Related papers (2024-12-10T23:25:14Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even superhuman persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge [84.34545223897578]
Despite the excellence of LLM-as-a-Judge in many domains, its potential issues remain under-explored, undermining its reliability and the scope of its utility.
We identify 12 key potential biases and propose a new automated bias quantification framework, CALM, which quantifies and analyzes each type of bias in LLM-as-a-Judge.
Our work highlights the need for stakeholders to address these issues and reminds users to exercise caution in LLM-as-a-Judge applications.
arXiv Detail & Related papers (2024-10-03T17:53:30Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- How Susceptible are Large Language Models to Ideological Manipulation? [14.598848573524549]
Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information.
This raises concerns about the societal impact that could arise if the ideologies within these models can be easily manipulated.
arXiv Detail & Related papers (2024-02-18T22:36:19Z)
- Exploring Value Biases: How LLMs Deviate Towards the Ideal [57.99044181599786]
Large Language Models (LLMs) are deployed in a wide range of applications, and their responses have an increasing social impact.
We show that value bias is strong in LLMs across different categories, similar to the results found in human studies.
arXiv Detail & Related papers (2024-02-16T18:28:43Z)
- A Group Fairness Lens for Large Language Models [34.0579082699443]
Large language models can perpetuate biases and unfairness when deployed in social media contexts.
We propose evaluating LLM biases from a group fairness lens using a novel hierarchical schema characterizing diverse social groups.
We pioneer a novel chain-of-thought method, GF-Think, to mitigate biases of LLMs from a group fairness perspective.
arXiv Detail & Related papers (2023-12-24T13:25:15Z)
- A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction [60.70089334782383]
Large language models (LLMs) have demonstrated great potential for domain-specific applications.
Recent disputes over GPT-4's law evaluation raise questions concerning LLMs' performance in real-world legal tasks.
We design practical baseline solutions based on LLMs and test on the task of legal judgment prediction.
arXiv Detail & Related papers (2023-10-18T07:38:04Z)
- A Survey on Fairness in Large Language Models [28.05516809190299]
Large Language Models (LLMs) have shown powerful performance and development prospects.
LLMs can capture social biases from unprocessed training data and propagate the biases to downstream tasks.
Unfair LLM systems have undesirable social impacts and potential harms.
arXiv Detail & Related papers (2023-08-20T03:30:22Z)
- Quantifying the Impact of Large Language Models on Collective Opinion Dynamics [7.0012506428382375]
We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also find that by introducing extra agents with opposite, neutral, or random opinions, we can effectively mitigate the impact of biased or toxic output.
arXiv Detail & Related papers (2023-08-07T05:45:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.