Helping users discover perspectives: Enhancing opinion mining with joint
topic models
- URL: http://arxiv.org/abs/2010.12505v2
- Date: Wed, 28 Apr 2021 20:28:16 GMT
- Title: Helping users discover perspectives: Enhancing opinion mining with joint
topic models
- Authors: Tim Draws, Jody Liu, Nava Tintarev
- Abstract summary: This paper explores how opinion mining can be enhanced with joint topic modeling.
We evaluate four joint topic models (TAM, JST, VODUM, and LAM) in a user study assessing human understandability of the extracted perspectives.
- Score: 5.2424255020469595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Support or opposition concerning a debated claim such as "abortion
should be legal" can have different underlying reasons, which we call perspectives. This
paper explores how opinion mining can be enhanced with joint topic modeling, to
identify distinct perspectives within the topic, providing an informative
overview from unstructured text. We evaluate four joint topic models (TAM, JST,
VODUM, and LAM) in a user study assessing human understandability of the
extracted perspectives. Based on the results, we conclude that joint topic
models such as TAM can discover perspectives that align with human judgments.
Moreover, our results suggest that users are not influenced by their
pre-existing stance on the topic of abortion when interpreting the output of
topic models.
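The joint topic models evaluated here (TAM, JST, VODUM, LAM) are not available in mainstream libraries, so as an illustration of the shared machinery they build on, below is a minimal collapsed Gibbs sampler for plain LDA in standard-library Python. The toy corpus, hyperparameters, and topic count are illustrative assumptions, not the paper's setup:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iters=200, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA (a stand-in for joint topic models)."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    # Count tables: topics per document, words per topic, totals per topic.
    doc_topic = [[0] * n_topics for _ in docs]
    topic_word = [defaultdict(int) for _ in range(n_topics)]
    topic_total = [0] * n_topics
    # Random initial topic assignment for every token.
    assign = []
    for di, doc in enumerate(docs):
        zs = []
        for w in doc:
            z = rng.randrange(n_topics)
            zs.append(z)
            doc_topic[di][z] += 1
            topic_word[z][w] += 1
            topic_total[z] += 1
        assign.append(zs)
    for _ in range(n_iters):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                z = assign[di][wi]
                # Remove the token's current assignment from the counts.
                doc_topic[di][z] -= 1
                topic_word[z][w] -= 1
                topic_total[z] -= 1
                # Resample its topic proportional to P(z | everything else).
                weights = [
                    (doc_topic[di][k] + alpha)
                    * (topic_word[k][w] + beta) / (topic_total[k] + beta * V)
                    for k in range(n_topics)
                ]
                z = rng.choices(range(n_topics), weights=weights)[0]
                assign[di][wi] = z
                doc_topic[di][z] += 1
                topic_word[z][w] += 1
                topic_total[z] += 1
    return doc_topic, topic_word

# Tiny illustrative corpus with two plausible "perspectives".
docs = [
    "abortion should be legal choice rights".split(),
    "choice rights women body".split(),
    "life unborn protect moral".split(),
    "moral life protect faith".split(),
]
doc_topic, topic_word = lda_gibbs(docs, n_topics=2)
```

Joint models such as JST extend exactly these count tables with an extra latent variable (e.g. a sentiment label per word), so the sampler above is the common core they all share.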
Related papers
- ArguMentor: Augmenting User Experiences with Counter-Perspectives [4.84187718353576]
We designed ArguMentor, a human-AI collaboration system that highlights claims in opinion pieces.
It identifies counter-arguments for them using an LLM and generates a context-based summary based on current events.
Our evaluation shows that participants generate more arguments and counter-arguments and, on average, hold more moderate views after engaging with the system.
arXiv Detail & Related papers (2024-06-04T21:43:56Z) - PAKT: Perspectivized Argumentation Knowledge Graph and Tool for Deliberation Analysis (with Supplementary Materials) [18.436817251174357]
We propose PAKT, a Perspectivized Argumentation Knowledge Graph and Tool.
The graph structures the argumentative space across diverse topics, where arguments are divided into premises and conclusions.
We show how to construct PAKT and conduct case studies on the obtained multifaceted argumentation graph.
arXiv Detail & Related papers (2024-04-16T13:47:19Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Fostering User Engagement in the Critical Reflection of Arguments [3.26297440422721]
We propose a system that engages in a deliberative dialogue with a human.
We enable the system to intervene if the user is too focused on their pre-existing opinion.
We report on a user study with 58 participants to test our model and the effect of the intervention mechanism.
arXiv Detail & Related papers (2023-08-17T15:48:23Z) - Natural Language Decompositions of Implicit Content Enable Better Text
Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z) - Perspectives on Large Language Models for Relevance Judgment [56.935731584323996]
It has been claimed that large language models (LLMs) can assist with relevance judgments.
It is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.
arXiv Detail & Related papers (2023-04-13T13:08:38Z) - Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
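The summary above does not specify the controllers' concrete format; the sketch below assumes hypothetical `<asp_*>` marker tokens purely to illustrate how aspect controllers can be attached to (review, summary) training pairs before fine-tuning:

```python
def with_aspect_controllers(review, summary, aspects):
    """Build one (source, target) training pair where hypothetical
    aspect-controller tokens are prepended to the review text."""
    controllers = " ".join(f"<asp_{a}>" for a in aspects)
    return f"{controllers} {review}", summary

# At generation time, changing the controller tokens steers the summary
# toward the requested aspects.
src, tgt = with_aspect_controllers(
    "The rooms were clean but the service was slow.",
    "Clean rooms, slow service.",
    ["rooms", "service"],
)
```

In this scheme, a model fine-tuned on such pairs learns to condition its output on whichever controller tokens appear in the source sequence.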
arXiv Detail & Related papers (2021-09-07T16:09:17Z) - Out of Context: A New Clue for Context Modeling of Aspect-based
Sentiment Analysis [54.735400754548635]
ABSA aims to predict the sentiment expressed in a review with respect to a given aspect.
We argue that the given aspect should be treated as a new clue from outside the context during the context modeling process.
We design several aspect-aware context encoders based on different backbones.
arXiv Detail & Related papers (2021-06-21T02:26:03Z) - ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive
Summarization with Argument Mining [61.82562838486632]
We crowdsource four new datasets on diverse online conversation forms of news comments, discussion forums, community question answering forums, and email threads.
We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.
arXiv Detail & Related papers (2021-06-01T22:17:13Z) - A Disentangled Adversarial Neural Topic Model for Separating Opinions
from Plots in User Reviews [35.802290746473524]
We propose a neural topic model combined with adversarial training to disentangle opinion topics from plot and neutral ones.
We conduct an experimental assessment introducing a new collection of movie and book reviews paired with their plots.
Experiments show improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models.
arXiv Detail & Related papers (2020-10-22T02:15:13Z) - A model to support collective reasoning: Formalization, analysis and
computational assessment [1.126958266688732]
We propose a new model to represent human debates and methods to obtain collective conclusions from them.
This model overcomes drawbacks of existing approaches by allowing users to introduce new pieces of information into the discussion.
We show that aggregated opinions can be coherent even if there is a lack of consensus.
arXiv Detail & Related papers (2020-07-14T06:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.