Enhanced Review Detection and Recognition: A Platform-Agnostic Approach with Application to Online Commerce
- URL: http://arxiv.org/abs/2405.06704v1
- Date: Thu, 9 May 2024 00:32:22 GMT
- Title: Enhanced Review Detection and Recognition: A Platform-Agnostic Approach with Application to Online Commerce
- Authors: Priyabrata Karmakar, John Hawkins
- Abstract summary: We present a machine learning methodology for review detection and extraction.
We demonstrate that it generalises for use across websites that were not contained in the training data.
This method promises to drive applications for automatic detection and evaluation of reviews, regardless of their source.
- Score: 0.46040036610482665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online commerce relies heavily on user generated reviews to provide unbiased information about products that they have not physically seen. The importance of reviews has attracted multiple exploitative online behaviours and requires methods for monitoring and detecting reviews. We present a machine learning methodology for review detection and extraction, and demonstrate that it generalises for use across websites that were not contained in the training data. This method promises to drive applications for automatic detection and evaluation of reviews, regardless of their source. Furthermore, we showcase the versatility of our method by implementing and discussing three key applications for analysing reviews: Sentiment Inconsistency Analysis, which detects and filters out unreliable reviews based on inconsistencies between ratings and comments; Multi-language support, enabling the extraction and translation of reviews from various languages without relying on HTML scraping; and Fake review detection, achieved by integrating a trained NLP model to identify and distinguish between genuine and fake reviews.
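As a rough illustration of the Sentiment Inconsistency Analysis application described in the abstract, the sketch below flags a review whose numeric rating contradicts the sentiment of its comment. The sentiment model, the rating threshold, and the field names are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of Sentiment Inconsistency Analysis (assumed design, not the
# authors' code): a review is treated as unreliable when its star rating and
# the sentiment of its comment point in opposite directions.
from transformers import pipeline  # stand-in sentiment scorer; any model works

sentiment = pipeline("sentiment-analysis")  # returns [{"label", "score"}]

def is_inconsistent(rating: float, comment: str, max_rating: float = 5.0) -> bool:
    """Flag reviews whose text sentiment contradicts the numeric rating."""
    result = sentiment(comment[:512])[0]          # truncate very long comments
    text_positive = result["label"] == "POSITIVE"
    rating_positive = rating >= 0.6 * max_rating  # assumed threshold (3 of 5)
    return text_positive != rating_positive

reviews = [
    {"rating": 5.0, "comment": "Terrible quality, broke after one day."},
    {"rating": 4.0, "comment": "Works exactly as described, very happy."},
]
reliable = [r for r in reviews if not is_inconsistent(r["rating"], r["comment"])]
```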
Related papers
- What Matters in Explanations: Towards Explainable Fake Review Detection Focusing on Transformers [45.55363754551388]
Customers' reviews and feedback play a crucial role on e-commerce platforms like Amazon, Zalando, and eBay.
There is a prevailing concern that sellers often post fake or spam reviews to deceive potential customers and manipulate their opinions about a product.
We propose an explainable framework for detecting fake reviews with high precision in identifying fraudulent content with explanations.
arXiv Detail & Related papers (2024-07-24T13:26:02Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - Unmasking Falsehoods in Reviews: An Exploration of NLP Techniques [0.0]
This research paper proposes a machine learning model to identify deceptive reviews.
To accomplish this, n-gram models with a capped number of max features are developed to effectively identify deceptive content.
The experimental results reveal that the passive aggressive classifier stands out among the various algorithms.
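A minimal scikit-learn sketch of the setup named above, an n-gram model with a capped feature count feeding a passive aggressive classifier; the n-gram range, the feature limit, and the toy data are assumptions rather than the paper's reported configuration.

```python
# Sketch of an n-gram model with limited max features feeding a passive
# aggressive classifier (assumed hyperparameters; the paper's may differ).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=5000),  # unigrams + bigrams
    PassiveAggressiveClassifier(max_iter=1000, random_state=0),
)

# texts: review strings; labels: 1 = deceptive, 0 = genuine (placeholder data)
texts = ["best product ever buy now", "solid build, arrived on time"]
labels = [1, 0]
model.fit(texts, labels)
print(model.predict(["amazing amazing must buy"]))
```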
arXiv Detail & Related papers (2023-07-20T06:35:43Z) - DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection [55.70982767084996]
A critical yet frequently overlooked challenge in the field of deepfake detection is the lack of a standardized, unified, comprehensive benchmark.
We present the first comprehensive benchmark for deepfake detection, called DeepfakeBench, which offers three key contributions.
DeepfakeBench contains 15 state-of-the-art detection methods, 9 deepfake datasets, a series of deepfake detection evaluation protocols and analysis tools, as well as comprehensive evaluations.
arXiv Detail & Related papers (2023-07-04T01:34:41Z) - Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases insignificant changes in input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z) - Online Fake Review Detection Using Supervised Machine Learning And BERT Model [0.0]
We propose to use the BERT (Bidirectional Encoder Representations from Transformers) model to extract word embeddings from texts (i.e. reviews).
The results indicate that the SVM classifiers outperform the others in terms of accuracy and f1-score with an accuracy of 87.81%.
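A hedged sketch of the pipeline described above: BERT embeddings for review texts followed by an SVM classifier. The checkpoint name, mean pooling, and the linear kernel are assumptions; the paper's exact choices may differ.

```python
# Sketch of a BERT-embeddings + SVM pipeline for fake review detection
# (illustrative only; pooling, checkpoint, and SVM settings are assumed).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVC

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pooled BERT token embeddings for a batch of review texts."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state        # (batch, tokens, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)     # ignore padding tokens
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

train_texts = ["genuine detailed review of the product", "FAKE!!! buy buy buy"]
train_labels = [0, 1]                              # 0 = genuine, 1 = fake
clf = SVC(kernel="linear").fit(embed(train_texts), train_labels)
print(clf.predict(embed(["looks like a real customer experience"])))
```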
arXiv Detail & Related papers (2023-01-09T09:40:56Z) - Mitigating Human and Computer Opinion Fraud via Contrastive Learning [0.0]
We introduce a novel approach to fake text review detection in collaborative filtering recommender systems.
Existing algorithms concentrate on detecting fake reviews generated by language models and ignore texts written by dishonest users.
We propose a contrastive learning-based architecture that utilizes user demographic characteristics, along with the review texts, as additional evidence against fakes.
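A minimal PyTorch sketch of the general idea of combining review-text embeddings with user demographic features under a contrastive objective; the encoder, the cosine-embedding loss, and all dimensions are assumptions, not the paper's architecture.

```python
# Sketch: fuse text embeddings with demographic features, then train with a
# contrastive objective (assumed design, not the paper's reference model).
import torch
import torch.nn as nn

class ReviewEncoder(nn.Module):
    def __init__(self, text_dim=768, demo_dim=8, out_dim=64):
        super().__init__()
        self.proj = nn.Linear(text_dim + demo_dim, out_dim)

    def forward(self, text_emb, demo):
        fused = torch.cat([text_emb, demo], dim=-1)        # text + demographics
        return nn.functional.normalize(self.proj(fused), dim=-1)

encoder = ReviewEncoder()
loss_fn = nn.CosineEmbeddingLoss(margin=0.5)
# y = 1 pulls genuine pairs together, y = -1 pushes genuine/fake pairs apart.
a = encoder(torch.randn(4, 768), torch.randn(4, 8))
b = encoder(torch.randn(4, 768), torch.randn(4, 8))
y = torch.tensor([1.0, 1.0, -1.0, -1.0])
loss = loss_fn(a, b, y)
loss.backward()
```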
arXiv Detail & Related papers (2023-01-08T12:02:28Z) - SIFN: A Sentiment-aware Interactive Fusion Network for Review-based Item Recommendation [48.1799451277808]
We propose a Sentiment-aware Interactive Fusion Network (SIFN) for review-based item recommendation.
We first encode user/item reviews via BERT and propose a light-weighted sentiment learner to extract semantic features of each review.
Then, we propose a sentiment prediction task that guides the sentiment learner to extract sentiment-aware features via explicit sentiment labels.
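The idea of a light-weighted sentiment learner guided by an explicit sentiment prediction task can be sketched as follows; the layer sizes, the auxiliary loss, and the plain MLP head are assumptions, not the SIFN reference implementation.

```python
# Sketch of a lightweight sentiment learner trained with an auxiliary
# sentiment prediction loss on explicit labels (assumed dimensions/loss).
import torch
import torch.nn as nn

class SentimentLearner(nn.Module):
    def __init__(self, hidden=768, sent_dim=64, n_sent_labels=2):
        super().__init__()
        self.extract = nn.Sequential(nn.Linear(hidden, sent_dim), nn.Tanh())
        self.sent_head = nn.Linear(sent_dim, n_sent_labels)  # auxiliary task

    def forward(self, review_emb, sent_labels=None):
        feats = self.extract(review_emb)             # sentiment-aware features
        aux_loss = None
        if sent_labels is not None:                  # explicit sentiment labels
            aux_loss = nn.functional.cross_entropy(self.sent_head(feats), sent_labels)
        return feats, aux_loss

# review_emb would come from a BERT encoder; a random stand-in is used here.
feats, aux = SentimentLearner()(torch.randn(4, 768), torch.tensor([0, 1, 1, 0]))
```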
arXiv Detail & Related papers (2021-08-18T08:04:38Z) - Fake Reviews Detection through Analysis of Linguistic Features [1.609940380983903]
This paper explores a natural language processing approach to identify fake reviews.
We study 15 linguistic features for distinguishing fake and trustworthy online reviews.
We were able to discriminate fake from real reviews with high accuracy using these linguistic features.
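The paper's 15 linguistic features are not enumerated in this snippet; the sketch below only shows the general shape of a hand-crafted-feature classifier, with a few illustrative features that are assumptions rather than the paper's feature set.

```python
# Sketch of hand-crafted linguistic features for fake-review classification
# (illustrative features only; not the paper's 15 studied features).
import re
from sklearn.linear_model import LogisticRegression

def linguistic_features(text: str) -> list[float]:
    words = text.split()
    return [
        len(words),                                            # review length
        sum(w.isupper() for w in words) / max(len(words), 1),  # all-caps ratio
        float(text.count("!")),                                # exclamation marks
        float(len(re.findall(r"\b(I|me|my)\b", text))),        # first-person pronouns
    ]

texts = ["I LOVE it!!! best EVER", "Battery lasts about six hours in daily use."]
labels = [1, 0]  # 1 = fake, 0 = trustworthy (placeholder labels)
clf = LogisticRegression().fit([linguistic_features(t) for t in texts], labels)
```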
arXiv Detail & Related papers (2020-10-08T21:16:30Z) - Unsupervised Opinion Summarization with Noising and Denoising [85.49169453434554]
We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.
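The synthetic-data construction described above can be illustrated with a short sketch: one review is sampled as a pseudo-summary and noisy versions of it serve as pseudo-inputs. The token-drop noise below is an assumed stand-in; the paper defines its own noising functions.

```python
# Sketch of the noising step for building (noisy inputs, pseudo-summary) pairs.
import random

def add_noise(summary: str, drop_prob: float = 0.2) -> str:
    """Token-drop noise: a crude stand-in for the paper's noising functions."""
    words = summary.split()
    kept = [w for w in words if random.random() > drop_prob]
    return " ".join(kept) if kept else summary

def make_synthetic_example(reviews: list[str], n_noisy: int = 3):
    pseudo_summary = random.choice(reviews)          # pretend it is a summary
    noisy_inputs = [add_noise(pseudo_summary) for _ in range(n_noisy)]
    return noisy_inputs, pseudo_summary              # (inputs, target) pair

reviews = ["great battery life and a sharp screen", "screen is sharp, battery ok"]
print(make_synthetic_example(reviews))
```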
arXiv Detail & Related papers (2020-04-21T16:54:57Z)