Examining Bias in Opinion Summarisation Through the Perspective of
Opinion Diversity
- URL: http://arxiv.org/abs/2306.04424v1
- Date: Wed, 7 Jun 2023 13:31:02 GMT
- Title: Examining Bias in Opinion Summarisation Through the Perspective of
Opinion Diversity
- Authors: Nannan Huang, Lin Tian, Haytham Fayek, Xiuzhen Zhang
- Abstract summary: We study bias in opinion summarisation from the perspective of opinion diversity.
We examine opinion similarity, a measure of how closely related two opinions are in terms of their stance on a given topic.
- Score: 7.16988671744865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Opinion summarisation is a task that aims to condense the information
presented in the source documents while retaining the core message and
opinions. A summary that only represents the majority opinions will leave the
minority opinions unrepresented in the summary. In this paper, we use the
stance towards a certain target as an opinion. We study bias in opinion
summarisation from the perspective of opinion diversity, which measures whether
the model generated summary can cover a diverse set of opinions. In addition,
we examine opinion similarity, a measure of how closely related two opinions
are in terms of their stance on a given topic, and its relationship with
opinion diversity. Through the lens of stances towards a topic, we examine
opinion diversity and similarity using three debatable topics under COVID-19.
Experimental results on these topics revealed that a higher degree of opinion
similarity did not imply good diversity or fair coverage of the various
opinions originally presented in the source documents. We found that
BART and ChatGPT can better capture diverse opinions presented in the source
documents.
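The stance-based notion of opinion diversity described above (whether a summary covers the stances present in the source documents) can be sketched as a toy coverage measure. This is a hypothetical illustration, not the paper's actual metric, and it assumes stance labels such as favor/against/neutral are already available:

```python
def stance_coverage(source_stances, summary_stances):
    """Fraction of distinct stances in the source documents
    that also appear in the generated summary."""
    source_set = set(source_stances)
    if not source_set:
        return 0.0
    covered = source_set & set(summary_stances)
    return len(covered) / len(source_set)

# A summary echoing only the majority stance covers 1 of 3 stances.
source = ["favor", "favor", "favor", "against", "neutral"]
majority_only = ["favor"]
print(stance_coverage(source, majority_only))  # 0.3333333333333333
```

A summary that leaves minority opinions unrepresented scores low on this kind of measure even when its text closely resembles the majority of the source.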
Related papers
- My part is bigger than yours -- assessment within a group of peers using the pairwise comparisons method [0.0]
A project (e.g. writing a collaborative research paper) is often a group effort. At the end, each contributor identifies his or her contribution, often verbally.
This leads to the question of what (percentage) share in the creation of the paper is due to individual authors.
We present a simple model that aggregates experts' opinions, linking the priority of each expert's preference directly to the assessments made by the other experts.
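The abstract does not give the model's details; as a hedged illustration, the classical pairwise-comparisons machinery it builds on can be sketched as follows, where entry `M[i][j]` records how much larger contributor i's share is judged to be than contributor j's, and shares are derived with the standard geometric-mean method (an assumed choice, not necessarily the paper's):

```python
import math

def priorities(M):
    """Derive a normalized share vector from a reciprocal
    pairwise-comparison matrix via the geometric-mean method."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Author A's part judged twice B's and four times C's.
M = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
print(priorities(M))  # ≈ [0.571, 0.286, 0.143]
```

With a consistent matrix like this one, the derived shares reproduce the judged ratios exactly; inconsistent judgments are smoothed by the geometric mean.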
arXiv Detail & Related papers (2024-07-01T22:54:51Z)
- Fair Abstractive Summarization of Diverse Perspectives [103.08300574459783]
A fair summary should provide a comprehensive coverage of diverse perspectives without underrepresenting certain groups.
We first formally define fairness in abstractive summarization as not underrepresenting perspectives of any groups of people.
We propose four reference-free automatic metrics by measuring the differences between target and source perspectives.
arXiv Detail & Related papers (2023-11-14T03:38:55Z)
- Automatically Evaluating Opinion Prevalence in Opinion Summarization [0.9971537447334835]
We propose an automatic metric to test the prevalence of the opinions that a summary expresses.
We consider several existing methods to score the factual consistency of a summary statement.
We show that a human authored summary has only slightly better opinion prevalence than randomly selected extracts from the source reviews.
arXiv Detail & Related papers (2023-07-26T17:13:00Z)
- Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation [55.00687185394986]
We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences.
Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria, such as depth of discussion and identification of consensus and controversy for the specific domain, and (2) combining task decomposition with iterative self-refinement shows strong potential for enhancing the generated meta-reviews.
arXiv Detail & Related papers (2023-05-24T02:33:35Z)
- Fine-Grained Opinion Summarization with Minimal Supervision [48.43506393052212]
FineSum aims to profile a target by extracting opinions from multiple documents.
FineSum automatically identifies opinion phrases from the raw corpus, classifies them into different aspects and sentiments, and constructs multiple fine-grained opinion clusters under each aspect/sentiment.
Both automatic evaluation on the benchmark and quantitative human evaluation validate the effectiveness of our approach.
arXiv Detail & Related papers (2021-10-17T15:16:34Z)
- Learning Opinion Summarizers by Selecting Informative Reviews [81.47506952645564]
We collect a large dataset of summaries paired with user reviews for over 31,000 products, enabling supervised training.
The content of many reviews is not reflected in the human-written summaries, and thus a summarizer trained on random review subsets hallucinates.
We formulate the task as jointly learning to select informative subsets of reviews and summarizing the opinions expressed in these subsets.
arXiv Detail & Related papers (2021-09-09T15:01:43Z)
- MultiOpEd: A Corpus of Multi-Perspective News Editorials [46.86995662807853]
MultiOpEd is an open-domain news editorial corpus that supports various tasks pertaining to the argumentation structure in news editorials.
We study the problem of perspective summarization in a multi-task learning setting, as a case study.
We show that, with the induced tasks as auxiliary tasks, we can improve the quality of the perspective summary generated.
arXiv Detail & Related papers (2021-06-04T21:23:22Z)
- Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces [1.3286165491120467]
We operationalize the notion of framing, adopted from communication science.
We apply this notion to a re-ranking of topic-relevant recommended lists.
Our offline evaluation indicates that the proposed method is capable of enhancing the viewpoint diversity of recommendation lists.
arXiv Detail & Related papers (2021-01-15T14:40:34Z)
- Aspect-based Sentiment Analysis of Scientific Reviews [12.472629584751509]
We show that the distribution of aspect-based sentiments obtained from a review is significantly different for accepted and rejected papers.
As a second objective, we quantify the extent of disagreement among the reviewers refereeing a paper.
We also investigate the extent of disagreement between the reviewers and the chair and find that inter-reviewer disagreement may be linked to disagreement with the chair.
arXiv Detail & Related papers (2020-06-05T07:06:01Z)
- Towards Quantifying the Distance between Opinions [66.29568619199074]
We find that measures based solely on text similarity or on overall sentiment often fail to effectively capture the distance between opinions.
We propose a new distance measure for capturing the similarity between opinions that leverages the nuanced observation.
In an unsupervised setting, our distance measure achieves significantly better Adjusted Rand Index scores (up to 56x) and Silhouette coefficients (up to 21x) compared to existing approaches.
arXiv Detail & Related papers (2020-01-27T16:01:10Z)
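The Adjusted Rand Index cited in the last entry measures chance-corrected agreement between a clustering of opinions and reference groupings. A self-contained sketch of the standard definition (not that paper's own code):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Chance-corrected agreement between two clusterings of
    the same items (1.0 means identical partitions)."""
    n = len(labels_true)
    # Contingency counts between the two partitions.
    cont = Counter(zip(labels_true, labels_pred))
    a = Counter(labels_true)   # cluster sizes in the reference
    b = Counter(labels_pred)   # cluster sizes in the prediction
    index = sum(comb(c, 2) for c in cont.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

# The same partition under relabelling scores 1.0.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

Because the index is corrected for chance, random label assignments score near zero, which is what makes the reported "up to 56x" improvements over baselines meaningful.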
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.