Generative Echo Chamber? Effects of LLM-Powered Search Systems on
Diverse Information Seeking
- URL: http://arxiv.org/abs/2402.05880v2
- Date: Sat, 10 Feb 2024 17:03:58 GMT
- Title: Generative Echo Chamber? Effects of LLM-Powered Search Systems on
Diverse Information Seeking
- Authors: Nikhil Sharma, Q. Vera Liao, Ziang Xiao
- Abstract summary: Conversational search systems powered by large language models (LLMs) have already been used by hundreds of millions of people.
We investigate whether and how LLMs with opinion biases that either reinforce or challenge the user's view change the effect.
- Score: 49.02867094432589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational search systems powered by large language models (LLMs) have
already been used by hundreds of millions of people and are believed to bring
many benefits over conventional search. However, while decades of research and
public discourse have interrogated the risk of search systems increasing
selective exposure and creating echo chambers -- limiting exposure to diverse
opinions and leading to opinion polarization -- little is known about such a risk
of LLM-powered conversational search. We conduct two experiments to
investigate: 1) whether and how LLM-powered conversational search increases
selective exposure compared to conventional search; 2) whether and how LLMs
with opinion biases that either reinforce or challenge the user's view change
the effect. Overall, we found that participants engaged in more biased
information querying with LLM-powered conversational search, and that an
opinionated LLM reinforcing their views exacerbated this bias. These results
present critical implications for the development of LLMs and conversational
search systems, and for the policy governing these technologies.
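To make "biased information querying" concrete, the sketch below shows one simple way such a tendency could be quantified: score each query's stance on the debated topic and average its alignment with the user's prior view. The function name and scoring scheme are hypothetical illustrations, not the instrument used in the paper's experiments.

```python
from statistics import mean

def selective_exposure_score(user_stance: float, query_stances: list[float]) -> float:
    """Mean alignment between a user's prior stance and the stances of their
    queries: +1 means every query reinforces the prior, -1 means every query
    challenges it. (Hypothetical metric for illustration only.)"""
    return mean(s * user_stance for s in query_stances)

# Example: a user with stance +1 issuing mostly like-minded queries.
print(selective_exposure_score(+1.0, [0.8, 0.6, 0.9, -0.2]))  # -> 0.525
```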
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even superhuman persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- LLMs as Research Tools: A Large Scale Survey of Researchers' Usage and Perceptions [20.44227547555244]
Large language models (LLMs) have led many researchers to consider their usage for scientific work.
We present the first large-scale survey of 816 verified research article authors.
We find that 81% of researchers have already incorporated LLMs into different aspects of their research workflow.
arXiv Detail & Related papers (2024-10-30T04:25:23Z)
- Retrieving Implicit and Explicit Emotional Events Using Large Language Models [4.245183693179267]
Large language models (LLMs) have garnered significant attention in recent years due to their impressive performance.
This study investigates LLMs' ability to retrieve implicit and explicit emotional events in commonsense scenarios.
arXiv Detail & Related papers (2024-10-24T19:56:28Z)
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment [37.985947029716016]
Large language models (LLMs) have shown advanced understanding capabilities but may inherit human biases from their training data.
We investigated whether LLMs are influenced by the threshold priming effect in relevance judgments.
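As a rough illustration of how such a priming effect could be probed, the sketch below grades a batch of documents in a single prompt, so early documents can shift the threshold the model applies to later ones. The `call_llm` stub, the prompt wording, and the 0-3 grading scale are all assumptions, not the study's actual protocol.

```python
# Hypothetical sketch of a threshold-priming check for an LLM relevance judge;
# `call_llm` stands in for any chat-completion client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def judge_batch(query: str, docs: list[str]) -> list[int]:
    """Grade a whole batch in one prompt, so earlier documents can prime
    the relevance threshold the model applies to later ones."""
    numbered = "\n".join(f"[{i}] {d}" for i, d in enumerate(docs))
    prompt = (f"Query: {query}\nGrade each document from 0 (irrelevant) to 3 "
              f"(highly relevant), one integer per line:\n{numbered}")
    return [int(tok) for tok in call_llm(prompt).split()]

# Same target document after high- vs. low-relevance primes; a priming effect
# shows up as different grades for the identical final document.
target = "Echo chambers in LLM-powered search: a user study."
high_batch = ["A meta-analysis of selective exposure in web search.", target]
low_batch = ["A recipe blog post about sourdough starters.", target]
# grades_hi = judge_batch("echo chambers in search", high_batch)
# grades_lo = judge_batch("echo chambers in search", low_batch)
```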
arXiv Detail & Related papers (2024-09-24T12:23:15Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model with far fewer parameters and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
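The gating logic the summary describes might look roughly like the sketch below: a cheap proxy drafts a heuristic answer, and its estimated quality decides whether the larger model answers directly or with retrieved documents. All helper functions are hypothetical stand-ins, not the released SlimPLM code.

```python
# Rough sketch of the proxy-gated retrieval idea; every helper here is a
# hypothetical stand-in, not the paper's implementation.

def proxy_answer(question: str) -> str: ...           # small model drafts an answer
def answer_confidence(question: str, draft: str) -> float: ...  # estimate draft quality
def retrieve(query: str) -> list[str]: ...            # fetch supporting documents
def main_llm(question: str, context: list[str] | None = None) -> str: ...

def answer_with_proxy(question: str, threshold: float = 0.7) -> str:
    draft = proxy_answer(question)                    # heuristic answer from the proxy
    if answer_confidence(question, draft) >= threshold:
        return main_llm(question)                     # knowledge seems present: skip retrieval
    docs = retrieve(question + " " + draft)           # target the predicted knowledge gap
    return main_llm(question, context=docs)           # retrieval-augmented answer
```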
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- Quantifying the Impact of Large Language Models on Collective Opinion Dynamics [7.0012506428382375]
We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also find that, by introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output.
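A minimal sketch of that mitigation idea, using a generic DeGroot-style update rather than the paper's actual dynamics: regular agents drift toward the group mean and are nudged by the LLM's output opinion, while a few fixed counter-agents with opposite/neutral/random opinions dilute its pull.

```python
import random

def step(opinions, stubborn, llm_opinion, weight=0.1):
    """One synchronous update: regular agents average toward the group mean
    and are nudged by the LLM's output opinion; `stubborn` agents (the
    mitigation lever from the abstract) never change. A generic DeGroot-style
    illustration, not the paper's exact model."""
    avg = sum(opinions + stubborn) / (len(opinions) + len(stubborn))
    return [(1 - weight) * (0.5 * o + 0.5 * avg) + weight * llm_opinion
            for o in opinions]

random.seed(0)
agents = [random.uniform(-1, 1) for _ in range(50)]
counter = [-0.9, 0.0, random.uniform(-1, 1)]  # opposite / neutral / random agents
for _ in range(200):
    agents = step(agents, counter, llm_opinion=0.9)
print(round(sum(agents) / len(agents), 2))    # settles below the LLM's 0.9
```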
arXiv Detail & Related papers (2023-08-07T05:45:17Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
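A minimal sketch of that judge-then-retrieve pattern, assuming hypothetical `call_llm` and `retrieve` helpers and illustrative prompt wording rather than the paper's method:

```python
# Ask the model whether the question falls inside its own knowledge, and only
# attach retrieved supporting documents when it signals uncertainty.
# Prompts and helpers are assumptions, not the paper's judgement strategy.

def call_llm(prompt: str) -> str: ...
def retrieve(query: str) -> list[str]: ...

def answer(question: str) -> str:
    verdict = call_llm("Can you answer the following question from your own "
                       f"knowledge? Reply yes or no.\nQuestion: {question}")
    if verdict.strip().lower().startswith("yes"):
        return call_llm(f"Answer concisely: {question}")  # rely on internal knowledge
    docs = "\n".join(retrieve(question))                  # knowledge boundary reached
    return call_llm(f"Answer using only these documents:\n{docs}\n\nQuestion: {question}")
```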
arXiv Detail & Related papers (2023-07-20T16:46:10Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes an LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules.
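The loop might be sketched as below, with every helper a hypothetical stand-in for the system's plug-and-play modules: retrieve evidence, draft with the black-box LLM, verify the draft against the evidence, and feed an automated error message back until the answer checks out.

```python
# Rough sketch of an evidence-grounded generate-verify-revise loop; the
# helpers are stand-ins, not the released system's modules.

def retrieve_evidence(query: str) -> str: ...
def black_box_llm(prompt: str) -> str: ...
def fact_check(response: str, evidence: str) -> tuple[bool, str]: ...

def augmented_answer(query: str, max_rounds: int = 3) -> str:
    evidence, feedback, response = retrieve_evidence(query), "", ""
    for _ in range(max_rounds):
        response = black_box_llm(f"Evidence:\n{evidence}\n{feedback}\nQuery: {query}")
        ok, critique = fact_check(response, evidence)     # automated feedback module
        if ok:
            return response
        feedback = f"Your previous answer was unsupported: {critique}. Revise it."
    return response  # best effort after max_rounds
```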
arXiv Detail & Related papers (2023-02-24T18:48:43Z)