Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking
- URL: http://arxiv.org/abs/2402.05880v2
- Date: Sat, 10 Feb 2024 17:03:58 GMT
- Title: Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking
- Authors: Nikhil Sharma, Q. Vera Liao, Ziang Xiao
- Abstract summary: Conversational search systems powered by large language models (LLMs) have already been used by hundreds of millions of people.
We investigate whether and how LLMs with opinion biases that either reinforce or challenge the user's view change the effect.
- Score: 49.02867094432589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational search systems powered by large language models (LLMs) have already been used by hundreds of millions of people and are believed to bring many benefits over conventional search. However, while decades of research and public discourse have interrogated the risk that search systems increase selective exposure and create echo chambers, limiting exposure to diverse opinions and driving opinion polarization, little is known about whether LLM-powered conversational search carries the same risk. We conduct two experiments to investigate: 1) whether and how LLM-powered conversational search increases selective exposure compared to conventional search; and 2) whether and how LLMs with opinion biases that either reinforce or challenge the user's view change the effect. Overall, we found that participants engaged in more biased information querying with LLM-powered conversational search, and that an opinionated LLM reinforcing their views exacerbated this bias. These results carry critical implications for the development of LLMs and conversational search systems, and for the policy governing these technologies.
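To make the experimental manipulation concrete, here is a minimal sketch of how an opinion-biased conversational search condition could be wired up. It is illustrative only: it assumes the OpenAI Python SDK, and the model name, system prompts, and the conversational_search helper are placeholders rather than the authors' experimental materials.

```python
# Illustrative sketch (not the authors' materials): the three conditions the
# abstract describes -- a neutral LLM, one that reinforces the user's view,
# and one that challenges it -- expressed as system prompts.
from openai import OpenAI

client = OpenAI()

CONDITION_PROMPTS = {
    "neutral": "You are a search assistant. Present balanced evidence on both sides of the topic.",
    "reinforcing": "You are a search assistant. Emphasize evidence that supports the user's stated view.",
    "challenging": "You are a search assistant. Emphasize evidence that challenges the user's stated view.",
}

def conversational_search(query: str, user_view: str, condition: str) -> str:
    """Answer one search query under a single opinion-bias condition."""
    messages = [
        {"role": "system", "content": CONDITION_PROMPTS[condition]},
        {"role": "user", "content": f"My current view: {user_view}\nSearch query: {query}"},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

# Example: the same query answered under the view-reinforcing condition.
# print(conversational_search("health effects of caffeine",
#                             "I think caffeine is harmful", "reinforcing"))
```

In the study's terms, the reinforcing condition corresponds to an opinionated LLM that agrees with the user's stated view, while the challenging condition disagrees with it.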
Related papers
- Retrieving Implicit and Explicit Emotional Events Using Large Language Models [4.245183693179267]
Large language models (LLMs) have garnered significant attention in recent years due to their impressive performance.
This study investigates LLMs' ability to retrieve implicit and explicit emotional events in commonsense contexts.
arXiv Detail & Related papers (2024-10-24T19:56:28Z)
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment [37.985947029716016]
Large language models (LLMs) have shown advanced understanding capabilities but may inherit human biases from their training data.
We investigated whether LLMs are influenced by the threshold priming effect in relevance judgments.
arXiv Detail & Related papers (2024-09-24T12:23:15Z)
- Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models [70.19081534515371]
Large Language Models (LLMs) have gained widespread adoption in various natural language processing tasks.
However, they can generate unfaithful or inconsistent content that deviates from the input source, which can lead to severe consequences.
We propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers.
arXiv Detail & Related papers (2024-07-04T18:47:42Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model with far fewer parameters and take its answers as heuristic answers.
These heuristic answers are then used to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM (a rough code sketch of this proxy-then-retrieve idea appears after this list).
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- Exploring Value Biases: How LLMs Deviate Towards the Ideal [57.99044181599786]
Large language models (LLMs) are deployed in a wide range of applications, and their responses have an increasing social impact.
We show that value bias is strong in LLMs across different categories, similar to the results found in human studies.
arXiv Detail & Related papers (2024-02-16T18:28:43Z)
- Factuality of Large Language Models: A Survey [29.557596701431827]
We critically analyze existing work with the aim of identifying the major challenges and their associated causes.
We analyze the obstacles to automated factuality evaluation for open-ended text generation.
arXiv Detail & Related papers (2024-02-04T09:36:31Z)
- Negotiating with LLMS: Prompt Hacks, Skill Gaps, and Reasoning Deficits [1.4003044924094596]
We conduct a user study engaging over 40 individuals across all age groups in price negotiations with an LLM.
We show that the negotiated prices humans manage to achieve span a broad range, which points to a literacy gap in effectively interacting with LLMs.
arXiv Detail & Related papers (2023-11-26T08:44:58Z)
- Quantifying the Impact of Large Language Models on Collective Opinion Dynamics [7.0012506428382375]
We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also find that by introducing extra agents with opposite, neutral, or random opinions, we can effectively mitigate the impact of biased or toxic output.
arXiv Detail & Related papers (2023-08-07T05:45:17Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
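As a rough illustration of the proxy-model idea summarized in the SlimPLM entry above, the sketch below drafts a heuristic answer with a small model and triggers retrieval only when that draft looks unreliable. It is an assumption-laden sketch, not the paper's implementation: the OpenAI SDK, the model names, the needs_retrieval heuristic, and the retrieve callable are all placeholders.

```python
# Illustrative sketch of a proxy-then-retrieve pipeline (not the SlimPLM paper's
# implementation). A small "proxy" model drafts a heuristic answer; a simple
# check decides whether the large model should be given retrieved evidence.
from openai import OpenAI

client = OpenAI()

def draft_heuristic_answer(question: str) -> str:
    """Cheap first pass with a small model standing in for a slim proxy model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for the slim proxy model
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def needs_retrieval(draft: str) -> bool:
    """Toy heuristic: retrieve when the proxy hedges or declines to answer."""
    uncertain_markers = ("i'm not sure", "i do not know", "as of my knowledge")
    return any(marker in draft.lower() for marker in uncertain_markers)

def answer(question: str, retrieve) -> str:
    """Answer with the full-size model, adding retrieved evidence only when needed."""
    draft = draft_heuristic_answer(question)
    context = retrieve(question, draft) if needs_retrieval(draft) else ""
    prompt = f"Context:\n{context}\n\nQuestion: {question}" if context else question
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder for the full-size LLM
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```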
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.