Explain and Conquer: Personalised Text-based Reviews to Achieve
Transparency
- URL: http://arxiv.org/abs/2205.01759v1
- Date: Tue, 3 May 2022 20:04:32 GMT
- Title: Explain and Conquer: Personalised Text-based Reviews to Achieve
Transparency
- Authors: Iñigo López-Riobóo Botana (1), Verónica Bolón-Canedo (1), Bertha
Guijarro-Berdiñas (1), Amparo Alonso-Betanzos (1) ((1) University of A Coruña -
Research Center on Information and Communication Technologies (CITIC))
- Abstract summary: We have focused on the TripAdvisor platform, considering the applicability to other dyadic data contexts.
Our aim is to represent and explain pairs (user, restaurant) established by agents (e.g., a recommender system or a paid promotion mechanism) so that personalisation is taken into account.
We propose the PTER (Personalised TExt-based Reviews) model. We predict, from the available reviews for a given restaurant, those that fit the specific user interactions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There are many contexts where dyadic data is present. Social networking is a
well-known example, where transparency has grown in importance. In these
contexts, pairs of items are linked, building a network where interactions play
a crucial role. Explaining why these relationships are established is central
to addressing transparency. These explanations are often presented as text,
thanks to the spread of natural language understanding tasks.
We have focused on the TripAdvisor platform, considering the applicability to
other dyadic data contexts. The items are a subset of users and restaurants, and
the interactions are the reviews posted by these users. Our aim is to represent
and explain pairs (user, restaurant) established by agents (e.g., a recommender
system or a paid promotion mechanism), so that personalisation is taken into
account. We propose the PTER (Personalised TExt-based Reviews) model. We
predict, from the available reviews for a given restaurant, those that fit the
specific user interactions.
PTER leverages the BERT (Bidirectional Encoder Representations from
Transformers) language model. We customised a deep neural network following the
feature-based approach. The performance metrics show the validity of our
labelling proposal. We defined an evaluation framework based on a clustering
process to assess our personalised representation. PTER clearly outperforms the
proposed adversary in 5 of the 6 datasets, with a minimum ratio improvement of
4%.
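The abstract describes a feature-based use of BERT: fixed encoder features feed a small customised network that decides whether a candidate review fits a given user. Below is a minimal Python sketch of that general pattern; the checkpoint name, embedding dimensions, dummy user-feature vector and scoring head are illustrative assumptions, not the PTER implementation.

# Minimal sketch of a feature-based BERT pipeline (assumed setup, not the
# authors' code): BERT weights stay frozen and only a small custom head is
# trained to score how well a candidate review fits a user.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # feature-based approach: the language model is not fine-tuned

@torch.no_grad()
def review_embedding(text: str) -> torch.Tensor:
    """Return the [CLS] vector of a review as a fixed 768-d feature."""
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    return encoder(**inputs).last_hidden_state[:, 0, :]

class ReviewFitHead(nn.Module):
    """Small custom network scoring how well a candidate review fits a user."""
    def __init__(self, text_dim: int = 768, user_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + user_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, text_feat: torch.Tensor, user_feat: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(torch.cat([text_feat, user_feat], dim=-1)))

# Usage: score one candidate review of a restaurant for a (dummy) user vector.
head = ReviewFitHead()
fit = head(review_embedding("Great tapas and friendly staff."), torch.zeros(1, 32))
print(float(fit))  # probability-like fit score in (0, 1)

In a feature-based setup like this, only the small head would be trained, so ranking all available reviews of a restaurant for a specific user stays inexpensive.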
Related papers
- Towards Bridging Review Sparsity in Recommendation with Textual Edge Graph Representation [28.893058826607735]
We propose a unified framework that imputes missing reviews by jointly modeling semantic and structural signals.
Experiments on the Amazon and Goodreads datasets show that TWISTER consistently outperforms traditional numeric, graph-based, and LLM baselines.
In summary, TWISTER generates reviews that are more helpful, authentic, and specific, while smoothing structural signals for improved recommendations.
arXiv Detail & Related papers (2025-08-02T00:53:40Z)
- A Personalized Conversational Benchmark: Towards Simulating Personalized Conversations [112.81207927088117]
PersonaConvBench is a benchmark for evaluating personalized reasoning and generation in multi-turn conversations with large language models (LLMs).
We benchmark several commercial and open-source LLMs under a unified prompting setup and observe that incorporating personalized history yields substantial performance improvements.
arXiv Detail & Related papers (2025-05-20T09:13:22Z)
- Prompt-based Personality Profiling: Reinforcement Learning for Relevance Filtering [8.20929362102942]
Author profiling is the task of inferring characteristics about individuals by analyzing content they share.
We propose a new method for author profiling that first distinguishes relevant from irrelevant content and then performs the actual user profiling only on the relevant data.
We evaluate our method for Big Five personality trait prediction on two Twitter corpora.
arXiv Detail & Related papers (2024-09-06T08:43:10Z)
- Do We Trust What They Say or What They Do? A Multimodal User Embedding Provides Personalized Explanations [35.77028281332307]
We propose Contribution-Aware Multimodal User Embedding (CAMUE) for social networks.
We show that our approach can provide personalized explainable predictions, automatically mitigating the impact of unreliable information.
Our work paves the way for more explainable, reliable, and effective social media user embeddings.
arXiv Detail & Related papers (2024-09-04T02:17:32Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- SIFN: A Sentiment-aware Interactive Fusion Network for Review-based Item Recommendation [48.1799451277808]
We propose a Sentiment-aware Interactive Fusion Network (SIFN) for review-based item recommendation.
We first encode user/item reviews via BERT and propose a light-weighted sentiment learner to extract semantic features of each review.
Then, we propose a sentiment prediction task that guides the sentiment learner to extract sentiment-aware features via explicit sentiment labels.
arXiv Detail & Related papers (2021-08-18T08:04:38Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Multi-Modal Subjective Context Modelling and Recognition [19.80579219657159]
We present a novel ontological context model that captures five dimensions, namely time, location, activity, social relations and object.
An initial context recognition experiment on real-world data hints at the promise of our model.
arXiv Detail & Related papers (2020-11-19T05:42:03Z)
- Disentangled Graph Collaborative Filtering [100.26835145396782]
Disentangled Graph Collaborative Filtering (DGCF) is a new model for learning informative representations of users and items from interaction data.
By modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations.
DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE.
arXiv Detail & Related papers (2020-07-03T15:37:25Z)
- BERT-based Ensembles for Modeling Disclosure and Support in Conversational Social Media Text [9.475039534437332]
We introduce a predictive ensemble model exploiting finetuned contextualized word embeddings from RoBERTa and ALBERT.
We show that our model outperforms the base models in all considered metrics, achieving an improvement of 3% in the F1 score.
arXiv Detail & Related papers (2020-06-01T19:52:01Z)
- IART: Intent-aware Response Ranking with Transformers in Information-seeking Conversation Systems [80.0781718687327]
We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model, "IART".
IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture.
arXiv Detail & Related papers (2020-02-03T05:59:52Z)