Adversarial Learning of Poisson Factorisation Model for Gauging Brand
Sentiment in User Reviews
- URL: http://arxiv.org/abs/2101.10150v1
- Date: Mon, 25 Jan 2021 14:58:17 GMT
- Title: Adversarial Learning of Poisson Factorisation Model for Gauging Brand
Sentiment in User Reviews
- Authors: Runcong Zhao and Lin Gui and Gabriele Pergola and Yulan He
- Abstract summary: We propose the Brand-Topic Model (BTM) which aims to detect brand-associated polarity-bearing topics from product reviews.
BTM is able to automatically infer real-valued brand-associated sentiment scores and generate fine-grained sentiment-topics.
It has been evaluated on a dataset constructed from Amazon reviews.
- Score: 15.047213517681936
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we propose the Brand-Topic Model (BTM) which aims to detect
brand-associated polarity-bearing topics from product reviews. Different from
existing models for sentiment-topic extraction which assume topics are grouped
under discrete sentiment categories such as `positive', `negative' and
`neutral', BTM is able to automatically infer real-valued brand-associated
sentiment scores and generate fine-grained sentiment-topics in which we can
observe continuous changes of words under a certain topic (e.g., `shaver' or
`cream') while its associated sentiment gradually varies from negative to
positive. BTM is built on the Poisson factorisation model with the
incorporation of adversarial learning. It has been evaluated on a dataset
constructed from Amazon reviews. Experimental results show that BTM outperforms
a number of competitive baselines in brand ranking, achieving a better balance
of topic coherence and uniqueness, and extracting better-separated
polarity-bearing topics.
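To make the abstract's ingredients concrete, the toy PyTorch sketch below combines a Poisson reconstruction term over word counts, a real-valued per-brand score that shifts topic-word intensities along a learned polarity direction, and a discriminator trained adversarially against the topic proportions. It is a minimal sketch under assumed shapes, losses, and weightings, not the authors' BTM implementation, whose inference details are not given here.

```python
# Illustrative sketch only: toy Poisson factorisation with a brand-sentiment
# offset and an adversarial discriminator. All sizes, losses and weights are
# assumptions for demonstration, not the BTM model itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
D, V, K, B = 200, 500, 10, 5                       # docs, vocab, topics, brands (toy sizes)
counts = torch.poisson(torch.rand(D, V) * 2.0)     # synthetic doc-word count matrix
brand_of_doc = torch.randint(0, B, (D,))           # brand id of each document

theta = nn.Parameter(torch.rand(D, K))             # doc-topic intensities (kept >= 0 via softplus)
beta = nn.Parameter(torch.rand(K, V))              # base topic-word intensities
polarity = nn.Parameter(0.01 * torch.randn(K, V))  # per-topic polarity direction over words
brand_score = nn.Parameter(torch.zeros(B))         # real-valued brand sentiment scores
disc = nn.Linear(K, 1)                             # adversary: guesses the brand score from topics

opt = torch.optim.Adam([theta, beta, polarity, brand_score], lr=0.05)
opt_disc = torch.optim.Adam(disc.parameters(), lr=0.05)

for step in range(200):
    s = brand_score[brand_of_doc]                  # (D,) sentiment score of each doc's brand

    # 1) train the adversary to predict the brand score from topic proportions
    props = F.softplus(theta).detach()
    props = props / props.sum(1, keepdim=True)
    d_loss = F.mse_loss(disc(props).squeeze(1), s.detach())
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) train the factorisation: Poisson reconstruction, while fooling the adversary
    props = F.softplus(theta)
    props = props / props.sum(1, keepdim=True)
    adv = F.mse_loss(disc(props).squeeze(1), s.detach())
    rate = torch.einsum('dk,dkv->dv',
                        F.softplus(theta),
                        F.softplus(beta.unsqueeze(0) + s.view(-1, 1, 1) * polarity.unsqueeze(0)))
    recon = (rate - counts * torch.log(rate + 1e-8)).mean()   # Poisson NLL up to a constant
    loss = recon - 0.1 * adv
    opt.zero_grad(); loss.backward(); opt.step()

print("learned brand scores:", brand_score.detach())
```

The adversarial term here simply pushes topic proportions to be uninformative about the brand score so that sentiment is carried by the polarity component; how BTM actually couples the adversary and the factorisation is described in the paper, not in this sketch.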
Related papers
- A Multifacet Hierarchical Sentiment-Topic Model with Application to Multi-Brand Online Review Analysis [6.661618396933143]
The proposed method is built on a unified generative framework that explains review words with a hierarchical brand-associated topic model.
A novel hierarchical Polya urn (HPU) scheme is proposed to enhance topic-word association within the topic hierarchy.
Experimental studies demonstrate that the proposed method can be effective in detecting reasonable topic hierarchy and deriving accurate brand-associated rankings.
arXiv Detail & Related papers (2025-02-26T08:30:06Z) - Rethinking Relation Extraction: Beyond Shortcuts to Generalization with a Debiased Benchmark [53.876493664396506]
Benchmarks are crucial for evaluating machine learning algorithm performance, facilitating comparison and identifying superior solutions.
This paper addresses the issue of entity bias in relation extraction tasks, where models tend to rely on entity mentions rather than context.
We propose a debiased relation extraction benchmark DREB that breaks the pseudo-correlation between entity mentions and relation types through entity replacement.
To establish a new baseline on DREB, we introduce MixDebias, a debiasing method combining data-level and model training-level techniques.
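As a rough illustration of entity replacement for debiasing (not the DREB construction itself), the snippet below swaps head and tail entity mentions with same-type substitutes so a model cannot lean on memorised entity names; the entity pools and the example sentence are invented.

```python
# Hedged sketch of entity replacement for debiasing relation extraction:
# replace the surface forms of head/tail entities with same-type substitutes.
import random

TYPE_POOL = {
    "PERSON": ["Alice Warren", "Tomás Rivera", "Mei Lin"],
    "ORG": ["Acme Corp", "Blue Harbor Labs", "Northfield Group"],
}

def replace_entities(text, head, tail):
    """head/tail: dicts with 'span' (surface string) and 'type' (entity type)."""
    new_head = random.choice([c for c in TYPE_POOL[head["type"]] if c != head["span"]])
    new_tail = random.choice([c for c in TYPE_POOL[tail["type"]] if c != tail["span"]])
    return text.replace(head["span"], new_head).replace(tail["span"], new_tail)

example = "Steve Jobs founded Apple in 1976."
print(replace_entities(example,
                       {"span": "Steve Jobs", "type": "PERSON"},
                       {"span": "Apple", "type": "ORG"}))
```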
arXiv Detail & Related papers (2025-01-02T17:01:06Z) - Diverging Preferences: When do Annotators Disagree and do Models Know? [92.24651142187989]
We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes.
We find that the majority of disagreements are in opposition to standard reward modeling approaches.
We develop methods for identifying diverging preferences to mitigate their influence on evaluation and training.
arXiv Detail & Related papers (2024-10-18T17:32:22Z) - Language Model Preference Evaluation with Multiple Weak Evaluators [78.53743237977677]
GED (Preference Graph Ensemble and Denoise) is a novel approach that leverages multiple model-based evaluators to construct preference graphs.
We show that GED outperforms baseline methods in model ranking, response selection, and model alignment tasks.
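A minimal sketch of the aggregation idea, assuming each weak evaluator emits pairwise preferences: votes are merged into a directed preference graph and models are ranked by net majority wins. GED's denoising of the graph is more involved and is not reproduced here; all judgements below are made up.

```python
# Hedged sketch: ensemble pairwise preferences from several weak evaluators
# into one weighted preference graph, then rank by majority wins.
from collections import defaultdict
from itertools import combinations

# each evaluator returns the preferred model for every pair (toy judgements)
evaluator_votes = [
    {("A", "B"): "A", ("A", "C"): "C", ("B", "C"): "B"},
    {("A", "B"): "A", ("A", "C"): "A", ("B", "C"): "B"},
    {("A", "B"): "B", ("A", "C"): "A", ("B", "C"): "C"},
]

models = ["A", "B", "C"]
edge_weight = defaultdict(int)                 # directed edge: winner -> loser
for votes in evaluator_votes:
    for pair, winner in votes.items():
        loser = pair[0] if winner == pair[1] else pair[1]
        edge_weight[(winner, loser)] += 1

# keep the majority direction for each pair, then rank by number of wins
score = defaultdict(int)
for a, b in combinations(models, 2):
    if edge_weight[(a, b)] > edge_weight[(b, a)]:
        score[a] += 1
    elif edge_weight[(b, a)] > edge_weight[(a, b)]:
        score[b] += 1

print(sorted(models, key=lambda m: -score[m]))   # e.g. ['A', 'B', 'C']
```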
arXiv Detail & Related papers (2024-10-14T01:57:25Z) - Addressing Topic Leakage in Cross-Topic Evaluation for Authorship Verification [7.467445326172115]
Authorship verification (AV) aims to identify whether a pair of texts has the same author.
Conventional evaluation assumes minimal topic overlap between training and test data.
We argue that there can still be topic leakage in test data, causing misleading model performance and unstable rankings.
arXiv Detail & Related papers (2024-07-27T04:16:11Z) - Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
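A hedged sketch of the filtering step, assuming the scorer is a callable returning a match score in [0, 1]: pseudo-labelled reviews below a threshold are dropped before retraining. The dummy scorer, threshold, and quad format are placeholders, not the paper's trained scorer.

```python
# Hedged sketch of self-training with a pseudo-label scorer: keep only
# pseudo-labelled examples whose review/label match score passes a threshold.
def filter_pseudo_labels(examples, scorer, threshold=0.8):
    """examples: list of (review_text, pseudo_quads); scorer returns a 0..1 match score."""
    kept = []
    for review, quads in examples:
        if scorer(review, quads) >= threshold:
            kept.append((review, quads))
    return kept

# toy usage with a stand-in scorer
dummy_scorer = lambda review, quads: 0.9 if quads else 0.1
data = [("Battery life is great", [("battery life", "battery", "great", "positive")]),
        ("Arrived yesterday", [])]
print(filter_pseudo_labels(data, dummy_scorer))
```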
arXiv Detail & Related papers (2024-06-26T05:30:21Z) - AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation [57.8363998797433]
We propose AMRFact, a framework that generates perturbed summaries using Abstract Meaning Representations (AMRs).
Our approach parses factually consistent summaries into AMR graphs and injects controlled factual inconsistencies to create negative examples, allowing for coherent factually inconsistent summaries to be generated with high error-type coverage.
arXiv Detail & Related papers (2023-11-16T02:56:29Z) - GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models [60.48306899271866]
We present a new framework, called GREAT Score, for global robustness evaluation of adversarial perturbation using generative models.
We show high correlation and significantly reduced cost of GREAT Score when compared to the attack-based model ranking on RobustBench.
GREAT Score can be used for remote auditing of privacy-sensitive black-box models.
arXiv Detail & Related papers (2023-04-19T14:58:27Z) - Tracking Brand-Associated Polarity-Bearing Topics in User Reviews [28.574971754268]
dBTM is able to automatically detect and track brand-associated sentiment scores and polarity-bearing topics from product reviews organised in temporally-ordered time intervals.
It has been evaluated on a dataset constructed from MakeupAlley reviews and a hotel review dataset.
arXiv Detail & Related papers (2023-01-03T18:30:34Z) - A Closer Look at Debiased Temporal Sentence Grounding in Videos:
Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflated evaluation caused by biased datasets.
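As one plausible reading of "discounting the basic recall scores" (the paper's exact discount is not reproduced here), the sketch below scales an IoU-based hit by how far the predicted start and end boundaries drift from the ground truth, normalised by video length.

```python
# Hedged sketch of a boundary-discounted recall hit in the spirit of dR@n,IoU@m;
# the discount factors are an assumption, not the paper's definition.
def discounted_hit(pred, gt, video_len, iou_thresh):
    """pred, gt: (start, end) in seconds; returns a discounted score in [0, 1]."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    iou = inter / union if union > 0 else 0.0
    if iou < iou_thresh:
        return 0.0
    alpha_s = 1.0 - abs(pred[0] - gt[0]) / video_len   # start-boundary discount
    alpha_e = 1.0 - abs(pred[1] - gt[1]) / video_len   # end-boundary discount
    return alpha_s * alpha_e

print(discounted_hit((5.0, 12.0), (6.0, 12.5), video_len=60.0, iou_thresh=0.5))
```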
arXiv Detail & Related papers (2022-03-10T08:58:18Z) - Synthesizing Adversarial Negative Responses for Robust Response Ranking
and Evaluation [34.52276336319678]
Open-domain neural dialogue models have achieved high performance in response ranking and evaluation tasks.
Over-reliance on content similarity makes the models less sensitive to the presence of inconsistencies.
We propose approaches for automatically creating adversarial negative training data.
arXiv Detail & Related papers (2021-06-10T16:20:55Z) - A Disentangled Adversarial Neural Topic Model for Separating Opinions
from Plots in User Reviews [35.802290746473524]
We propose a neural topic model combined with adversarial training to disentangle opinion topics from plot and neutral ones.
We conduct an experimental assessment introducing a new collection of movie and book reviews paired with their plots.
Results show improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models.
arXiv Detail & Related papers (2020-10-22T02:15:13Z) - Generator and Critic: A Deep Reinforcement Learning Approach for Slate
Re-ranking in E-commerce [17.712394984304336]
We present a novel Generator and Critic slate re-ranking approach, where the Critic evaluates the slate and the Generator ranks the items by the reinforcement learning approach.
For the Generator, to tackle the problem of large action space, we propose a new exploration reinforcement learning algorithm, called PPO-Exploration.
arXiv Detail & Related papers (2020-05-25T16:24:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.