Fake Reviews Detection through Analysis of Linguistic Features
- URL: http://arxiv.org/abs/2010.04260v1
- Date: Thu, 8 Oct 2020 21:16:30 GMT
- Title: Fake Reviews Detection through Analysis of Linguistic Features
- Authors: Faranak Abri, Luis Felipe Gutierrez, Akbar Siami Namin, Keith S.
Jones, David R. W. Sears
- Abstract summary: This paper explores a natural language processing approach to identify fake reviews.
We study 15 linguistic features for distinguishing fake and trustworthy online reviews.
We were able to discriminate fake from real reviews with high accuracy using these linguistic features.
- Score: 1.609940380983903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online reviews play an integral part in the success or failure of businesses.
Prior to purchasing services or goods, customers first review the online
comments submitted by previous customers. However, it is possible to
artificially boost or hinder some businesses by posting counterfeit and
fake reviews. This paper explores a natural language processing approach to
identify fake reviews. We present a detailed analysis of linguistic features
for distinguishing fake and trustworthy online reviews. We study 15 linguistic
features and measure their significance and importance towards the
classification schemes employed in this study. Our results indicate that fake
reviews tend to include more redundant terms and pauses, and generally contain
longer sentences. The application of several machine learning classification
algorithms revealed that we were able to discriminate fake from real reviews
with high accuracy using these linguistic features.
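The pipeline the abstract describes (hand-crafted linguistic features fed to standard classifiers) can be sketched as follows. The three features below are illustrative stand-ins for the kinds of signals the paper reports (longer sentences, redundant terms, pauses); they are assumptions for this sketch, not the paper's actual 15 features:

```python
import re
from statistics import mean

def linguistic_features(review: str) -> dict:
    """Compute a few illustrative linguistic features of a review.

    These mirror the abstract's findings (sentence length, redundancy,
    pauses) but are not the paper's exact feature set.
    """
    sentences = [s for s in re.split(r"[.!?]+", review) if s.strip()]
    words = re.findall(r"[A-Za-z']+", review.lower())
    return {
        # Longer sentences: mean words per sentence.
        "avg_sentence_len": mean(len(s.split()) for s in sentences) if sentences else 0.0,
        # Redundant terms: 1 - type/token ratio (higher = more repetition).
        "redundancy": 1 - len(set(words)) / len(words) if words else 0.0,
        # Pauses: ellipses and commas per sentence.
        "pauses_per_sentence": (review.count("...") + review.count(",")) / max(len(sentences), 1),
    }

feats = linguistic_features(
    "Great great product... really really great. I loved it, truly, so much..."
)
```

A vector of such features per review would then be passed to the machine learning classifiers the abstract mentions; the feature extraction is the part specific to this linguistic approach.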
Related papers
- What Matters in Explanations: Towards Explainable Fake Review Detection Focusing on Transformers [45.55363754551388]
Customers' reviews and feedback play a crucial role on e-commerce platforms like Amazon, Zalando, and eBay.
There is a prevailing concern that sellers often post fake or spam reviews to deceive potential customers and manipulate their opinions about a product.
We propose an explainable framework that detects fake reviews with high precision and provides explanations for the identified fraudulent content.
arXiv Detail & Related papers (2024-07-24T13:26:02Z)
- Enhanced Review Detection and Recognition: A Platform-Agnostic Approach with Application to Online Commerce [0.46040036610482665]
We present a machine learning methodology for review detection and extraction.
We demonstrate that it generalises for use across websites that were not contained in the training data.
This method promises to drive applications for automatic detection and evaluation of reviews, regardless of their source.
arXiv Detail & Related papers (2024-05-09T00:32:22Z)
- Unmasking Falsehoods in Reviews: An Exploration of NLP Techniques [0.0]
This research paper proposes a machine learning model to identify deceptive reviews.
To accomplish this, an n-gram model and max features are developed to effectively identify deceptive content.
The experimental results reveal that the passive aggressive classifier stands out among the various algorithms.
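The n-gram features and passive-aggressive classifier named above can be sketched in plain Python. The toy reviews, the feature set (unigrams plus bigrams), and the training loop are illustrative assumptions, not that paper's implementation:

```python
from collections import Counter

def ngrams(text: str, n: int = 2) -> Counter:
    """Bag of unigrams plus word n-grams as a sparse feature vector."""
    toks = text.lower().split()
    feats = Counter(toks)
    feats.update(" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return feats

def pa_train(data, epochs: int = 5):
    """Online passive-aggressive updates: weights change only when an
    example's margin falls below 1 (hinge loss is nonzero)."""
    w = {}
    for _ in range(epochs):
        for text, y in data:  # y is +1 (deceptive) or -1 (genuine)
            x = ngrams(text)
            score = sum(w.get(f, 0.0) * v for f, v in x.items())
            loss = max(0.0, 1.0 - y * score)
            if loss > 0:
                tau = loss / sum(v * v for v in x.values())  # aggressive step
                for f, v in x.items():
                    w[f] = w.get(f, 0.0) + tau * y * v
    return w

# Hypothetical labeled reviews, purely for illustration.
train = [
    ("best product ever buy now amazing deal", 1),
    ("absolutely perfect five stars buy now", 1),
    ("decent quality but shipping was slow", -1),
    ("works as described minor scratches on arrival", -1),
]
w = pa_train(train)
score = sum(w.get(f, 0.0) * v for f, v in ngrams("amazing deal buy now").items())
```

A positive score flags the review as deceptive; in practice one would use a library implementation (e.g. scikit-learn's `PassiveAggressiveClassifier`) over a much larger n-gram vocabulary.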
arXiv Detail & Related papers (2023-07-20T06:35:43Z)
- Combat AI With AI: Counteract Machine-Generated Fake Restaurant Reviews on Social Media [77.34726150561087]
We propose to leverage the high-quality elite Yelp reviews to generate fake reviews from the OpenAI GPT review creator.
We apply the model to predict non-elite reviews and identify the patterns across several dimensions.
We show that social media platforms are continuously challenged by machine-generated fake reviews.
arXiv Detail & Related papers (2023-02-10T19:40:10Z)
- Mitigating Human and Computer Opinion Fraud via Contrastive Learning [0.0]
We introduce a novel approach to fake text review detection in collaborative filtering recommender systems.
Existing algorithms concentrate on detecting fake reviews generated by language models and ignore texts written by dishonest users.
We propose a contrastive learning-based architecture that utilizes user demographic characteristics, along with the text reviews, as additional evidence against fakes.
arXiv Detail & Related papers (2023-01-08T12:02:28Z)
- Impact of Sentiment Analysis in Fake Review Detection [0.0]
This initial study investigates fake reviews using sentiment analysis.
Ten research papers addressing fake reviews are identified, and their currently available solutions for predicting or detecting fake reviews are discussed.
arXiv Detail & Related papers (2022-12-18T03:17:47Z)
- ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning [63.77667876176978]
Large language models show improved downstream task interpretability when prompted to generate step-by-step reasoning to justify their final answers.
These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness is difficult.
We present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics.
arXiv Detail & Related papers (2022-12-15T15:52:39Z)
- Fake or Genuine? Contextualised Text Representation for Fake Review Detection [0.4724825031148411]
This paper proposes a new ensemble model that employs a transformer architecture to discover the hidden patterns in a sequence of fake reviews and detect them precisely.
The experimental results using semi-real benchmark datasets showed the superiority of the proposed model over state-of-the-art models.
arXiv Detail & Related papers (2021-12-29T00:54:47Z)
- SIFN: A Sentiment-aware Interactive Fusion Network for Review-based Item Recommendation [48.1799451277808]
We propose a Sentiment-aware Interactive Fusion Network (SIFN) for review-based item recommendation.
We first encode user/item reviews via BERT and propose a lightweight sentiment learner to extract semantic features of each review.
Then, we propose a sentiment prediction task that guides the sentiment learner to extract sentiment-aware features via explicit sentiment labels.
arXiv Detail & Related papers (2021-08-18T08:04:38Z)
- Curious Case of Language Generation Evaluation Metrics: A Cautionary Tale [52.663117551150954]
A few popular metrics remain the de facto standard for evaluating tasks such as image captioning and machine translation.
This is partly due to ease of use, and partly because researchers expect to see them and know how to interpret them.
In this paper, we urge the community for more careful consideration of how they automatically evaluate their models.
arXiv Detail & Related papers (2020-10-26T13:57:20Z)
- A Unified Dual-view Model for Review Summarization and Sentiment Classification with Inconsistency Loss [51.448615489097236]
Acquiring accurate summarization and sentiment from user reviews is an essential component of modern e-commerce platforms.
We propose a novel dual-view model that jointly improves the performance of these two tasks.
Experiment results on four real-world datasets from different domains demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2020-06-02T13:34:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.