Examining Bias in Opinion Summarisation Through the Perspective of
Opinion Diversity
- URL: http://arxiv.org/abs/2306.04424v1
- Date: Wed, 7 Jun 2023 13:31:02 GMT
- Title: Examining Bias in Opinion Summarisation Through the Perspective of
Opinion Diversity
- Authors: Nannan Huang, Lin Tian, Haytham Fayek, Xiuzhen Zhang
- Abstract summary: We study bias in opinion summarisation from the perspective of opinion diversity.
We examine opinion similarity, a measure of how closely related two opinions are in terms of their stance on a given topic.
- Score: 7.16988671744865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Opinion summarisation is a task that aims to condense the information
presented in the source documents while retaining the core message and
opinions. A summary that represents only the majority opinions leaves the
minority opinions unrepresented. In this paper, we use the stance towards a
certain target as an opinion. We study bias in opinion summarisation from the
perspective of opinion diversity, which measures whether a model-generated
summary can cover a diverse set of opinions. In addition,
we examine opinion similarity, a measure of how closely related two opinions
are in terms of their stance on a given topic, and its relationship with
opinion diversity. Through the lens of stances towards a topic, we examine
opinion diversity and similarity using three debatable topics under COVID-19.
Experimental results on these topics revealed that a higher degree of opinion
similarity did not indicate good diversity or fair coverage of the various
opinions originally presented in the source documents. We found that
BART and ChatGPT can better capture diverse opinions presented in the source
documents.
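To make the two measures concrete, the sketch below computes both over stance labels. This is a minimal illustration assuming a small label set ({favor, against, neutral}), a set-coverage definition of diversity, and a histogram-intersection definition of similarity; none of these are the paper's exact formulations.

```python
from collections import Counter

# Illustrative stance labels for a debatable COVID-19 topic (an assumption).
STANCES = ("favor", "against", "neutral")

def opinion_diversity(source_stances, summary_stances):
    """Fraction of the distinct stances in the source that the summary covers.
    A summary echoing only the majority stance scores low."""
    source_set = set(source_stances)
    if not source_set:
        return 0.0
    return len(source_set & set(summary_stances)) / len(source_set)

def opinion_similarity(stances_a, stances_b):
    """Histogram intersection of two stance distributions (1 = identical mix)."""
    ca, cb = Counter(stances_a), Counter(stances_b)
    na, nb = sum(ca.values()), sum(cb.values())
    return sum(min(ca[s] / na, cb[s] / nb) for s in STANCES)

# Toy example: the source contains all three stances, the summary only one,
# so diversity is 1/3 even though the summary matches the majority stance.
source = ["favor", "favor", "against", "neutral", "favor"]
summary = ["favor", "favor"]
print(opinion_diversity(source, summary))   # 0.333...
print(opinion_similarity(source, summary))  # 0.6 (overlap with the source mix)
```

The toy example mirrors the paper's central point: a summary can be highly similar to the dominant opinion while scoring poorly on diversity.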
Related papers
- Causal Effect of Group Diversity on Redundancy and Coverage in Peer-Reviewing [28.370725937271448]
We study the effect of different measures of reviewer diversity on review coverage and redundancy.
We find no evidence of an increase in coverage for reviewer slates with reviewers from diverse organizations or geographical locations.
Our study adopts a group decision-making perspective for reviewer assignments in peer review and suggests dimensions of diversity that can help guide the reviewer assignment process.
arXiv Detail & Related papers (2024-11-18T10:08:10Z)
- Overview of PerpectiveArg2024: The First Shared Task on Perspective Argument Retrieval [56.66761232081188]
We present a novel dataset covering demographic and socio-cultural (socio) variables, such as age, gender, and political attitude, representing minority and majority groups in society.
We find substantial challenges in incorporating perspectivism, especially when aiming for personalization based solely on the text of arguments without explicitly providing socio profiles.
While we bootstrap perspective argument retrieval, further research is essential to optimize retrieval systems to facilitate personalization and reduce polarization.
arXiv Detail & Related papers (2024-07-29T03:14:57Z)
- On the Principles behind Opinion Dynamics in Multi-Agent Systems of Large Language Models [2.8282906214258805]
We study the evolution of opinions inside a population of interacting large language models (LLMs).
We identify biases that drive the exchange of opinions based on an LLM's tendency to find consensus with another LLM's opinion.
We find these biases are affected by the perceived absence of compelling reasons for opinion change, the perceived willingness to engage in discussion, and the distribution of allocation values.
arXiv Detail & Related papers (2024-06-18T18:37:23Z)
- Fair Abstractive Summarization of Diverse Perspectives [103.08300574459783]
A fair summary should provide a comprehensive coverage of diverse perspectives without underrepresenting certain groups.
We first formally define fairness in abstractive summarization as not underrepresenting perspectives of any groups of people.
We propose four reference-free automatic metrics by measuring the differences between target and source perspectives.
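One plausible instantiation of such a reference-free metric, sketched below, scores unfairness as the total variation distance between the perspective distributions of the source and the summary. The metric choice is an illustrative assumption, not necessarily one of the paper's four.

```python
from collections import Counter

def distribution(labels):
    """Empirical distribution over perspective labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def fairness_gap(source_labels, summary_labels):
    """Total variation distance between the source and summary perspective
    distributions; 0 means the summary mirrors the source mix exactly."""
    p, q = distribution(source_labels), distribution(summary_labels)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

# A summary drawn only from group A misrepresents a mixed source.
print(fairness_gap(["A", "A", "B", "C"], ["A", "A"]))  # 0.5
```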
arXiv Detail & Related papers (2023-11-14T03:38:55Z)
- Automatically Evaluating Opinion Prevalence in Opinion Summarization [0.9971537447334835]
We propose an automatic metric to test the prevalence of the opinions that a summary expresses.
We consider several existing methods to score the factual consistency of a summary statement.
We show that a human-authored summary has only slightly better opinion prevalence than randomly selected extracts from the source reviews.
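A minimal sketch of such a prevalence metric: score each summary sentence by the fraction of source reviews consistent with it, then average. The token-overlap consistency check below is a crude stand-in for the factual-consistency scorers the paper considers.

```python
def lexically_supported(statement, review, threshold=0.5):
    """Crude stand-in for a factual-consistency scorer: does the review
    cover at least `threshold` of the statement's tokens?"""
    s, r = set(statement.lower().split()), set(review.lower().split())
    return bool(s) and len(s & r) / len(s) >= threshold

def opinion_prevalence(summary_sentences, reviews):
    """Average, over summary sentences, of the fraction of source reviews
    that support each sentence."""
    if not summary_sentences or not reviews:
        return 0.0
    return sum(
        sum(lexically_supported(sent, rev) for rev in reviews) / len(reviews)
        for sent in summary_sentences
    ) / len(summary_sentences)

reviews = ["battery life is great", "great battery but poor screen", "screen is dim"]
print(opinion_prevalence(["battery life is great"], reviews))  # 2/3
```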
arXiv Detail & Related papers (2023-07-26T17:13:00Z)
- Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation [55.00687185394986]
We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences.
Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria, such as depth of discussion and identification of consensus and controversy for the specific domain, and (2) the combination of task decomposition and iterative self-refinement shows strong potential for enhancing the generated meta-reviews.
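A rough sketch of how task decomposition plus iterative self-refinement might be wired up; the `llm` callable and all prompts are hypothetical placeholders, not the paper's pipeline.

```python
def refine_meta_review(reviews, llm, rounds=3):
    """Task decomposition + iterative self-refinement, sketched.
    `llm` is a hypothetical text-in/text-out callable; prompts are
    illustrative only."""
    # Decompose: condense each review before synthesizing across them.
    per_review = [llm(f"Summarize this review's main judgments:\n{r}") for r in reviews]
    draft = llm(
        "Write a meta-review that identifies consensus and controversy:\n"
        + "\n".join(per_review)
    )
    # Refine: critique the draft against the criteria, then revise it.
    for _ in range(rounds):
        critique = llm(
            "List what this meta-review is missing regarding depth of "
            f"discussion, consensus, and controversy:\n{draft}"
        )
        draft = llm(f"Revise the meta-review to address:\n{critique}\n\n{draft}")
    return draft
```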
arXiv Detail & Related papers (2023-05-24T02:33:35Z)
- Fine-Grained Opinion Summarization with Minimal Supervision [48.43506393052212]
FineSum aims to profile a target by extracting opinions from multiple documents.
FineSum automatically identifies opinion phrases from the raw corpus, classifies them into different aspects and sentiments, and constructs multiple fine-grained opinion clusters under each aspect/sentiment.
Both automatic evaluation on the benchmark and quantitative human evaluation validate the effectiveness of our approach.
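The three-stage pipeline can be sketched as follows; all components (`extract_phrases`, `classify`, `embed`, `cluster`) are caller-supplied stand-ins, not FineSum's actual models.

```python
from collections import defaultdict

def finesum_style_pipeline(corpus, extract_phrases, classify, embed, cluster):
    """Skeleton of the three-stage pipeline:
      1. extract candidate opinion phrases from raw documents,
      2. assign each phrase an (aspect, sentiment) pair,
      3. cluster phrases within each aspect/sentiment bucket."""
    buckets = defaultdict(list)
    for doc in corpus:
        for phrase in extract_phrases(doc):
            buckets[classify(phrase)].append(phrase)
    # Fine-grained opinion clusters under each aspect/sentiment.
    return {
        key: cluster([embed(p) for p in phrases], phrases)
        for key, phrases in buckets.items()
    }
```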
arXiv Detail & Related papers (2021-10-17T15:16:34Z)
- Learning Opinion Summarizers by Selecting Informative Reviews [81.47506952645564]
We collect a large dataset of summaries paired with user reviews for over 31,000 products, enabling supervised training.
The content of many reviews is not reflected in the human-written summaries, and thus a summarizer trained on random review subsets hallucinates.
We formulate the task as jointly learning to select informative subsets of reviews and summarizing the opinions expressed in these subsets.
arXiv Detail & Related papers (2021-09-09T15:01:43Z)
- Aspect-based Sentiment Analysis of Scientific Reviews [12.472629584751509]
We show that the distribution of aspect-based sentiments obtained from a review is significantly different for accepted and rejected papers.
As a second objective, we quantify the extent of disagreement among the reviewers refereeing a paper.
We also investigate the extent of disagreement between the reviewers and the chair and find that the inter-reviewer disagreement may have a link to the disagreement with the chair.
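One simple way to quantify inter-reviewer disagreement, sketched below, is the mean pairwise distance between reviewers' aspect-sentiment vectors; this particular formula is an illustrative assumption, not necessarily the paper's.

```python
from itertools import combinations

def disagreement(sentiment_vectors):
    """Mean pairwise L1 distance between reviewers' aspect-sentiment
    vectors for one paper; higher means more disagreement."""
    pairs = list(combinations(sentiment_vectors, 2))
    if not pairs:
        return 0.0
    return sum(
        sum(abs(a - b) for a, b in zip(u, v)) for u, v in pairs
    ) / len(pairs)

# Three reviewers scoring (clarity, novelty, soundness) in [-1, 1]:
print(disagreement([(0.8, 0.2, 0.5), (0.7, 0.1, 0.4), (-0.6, -0.3, 0.0)]))
```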
arXiv Detail & Related papers (2020-06-05T07:06:01Z)
- Towards Quantifying the Distance between Opinions [66.29568619199074]
We find that measures based solely on text similarity or on overall sentiment often fail to effectively capture the distance between opinions.
We propose a new distance measure for capturing the similarity between opinions that leverages this nuanced observation.
In an unsupervised setting, our distance measure achieves significantly better Adjusted Rand Index scores (up to 56x) and Silhouette coefficients (up to 21x) compared to existing approaches.
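An unsupervised evaluation along these lines can be reproduced with scikit-learn: cluster opinion representations, then score the clustering with the Adjusted Rand Index against known opinion groups and with the Silhouette coefficient. The random features below are stand-ins for real opinion representations.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, silhouette_score

rng = np.random.default_rng(0)
# Random stand-ins for opinion representations; two underlying opinion groups.
X = np.vstack([rng.normal(0.0, 1.0, (20, 8)), rng.normal(3.0, 1.0, (20, 8))])
true_labels = np.array([0] * 20 + [1] * 20)

pred = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print("ARI:", adjusted_rand_score(true_labels, pred))  # near 1.0 on this toy data
print("Silhouette:", silhouette_score(X, pred))
```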
arXiv Detail & Related papers (2020-01-27T16:01:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.