Impact of Sentiment Analysis in Fake Review Detection
- URL: http://arxiv.org/abs/2212.08995v1
- Date: Sun, 18 Dec 2022 03:17:47 GMT
- Title: Impact of Sentiment Analysis in Fake Review Detection
- Authors: Amira Yousif and James Buckley
- Abstract summary: We propose developing an initial research paper for investigating fake reviews by using sentiment analysis.
Ten research papers on fake reviews are identified, and they discuss currently available solutions for predicting or detecting fake reviews.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fake review identification is an important topic and has gained the interest
of experts all around the world. Identifying fake reviews is challenging for
researchers, and there are several primary challenges to fake review detection.
We propose developing an initial research paper for investigating fake reviews
by using sentiment analysis. Ten research papers on fake reviews are
identified, and they discuss currently available solutions for predicting or
detecting fake reviews. They also show the distribution of fake and truthful
reviews through the analysis of sentiment. We summarize and compare previous
studies related to fake reviews. We highlight the most significant challenges
in the sentiment evaluation process and demonstrate that sentiment scores have
a significant impact on identifying fake reviews.
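The abstract's core idea, that sentiment signals can help separate fake from truthful reviews, can be illustrated with a minimal sketch. This is not the authors' method: the lexicon, the scoring formula, and the "extreme sentiment" heuristic below are all illustrative assumptions, reflecting only the common intuition that fake reviews tend to carry unusually one-sided sentiment.

```python
# Illustrative sketch (not the paper's method): a tiny lexicon-based
# sentiment scorer plus a naive flag for maximally one-sided reviews.

POSITIVE = {"great", "excellent", "amazing", "love", "perfect", "best"}
NEGATIVE = {"terrible", "awful", "worst", "hate", "horrible", "scam"}

def sentiment_score(review: str) -> float:
    """Return a score in [-1, 1]: (pos - neg) / total sentiment words."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def looks_suspicious(review: str, threshold: float = 0.99) -> bool:
    """Flag reviews whose sentiment is entirely one-sided."""
    return abs(sentiment_score(review)) >= threshold
```

A production system would replace the toy lexicon with a trained sentiment model and combine the score with other features rather than thresholding it alone.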
Related papers
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation [55.00687185394986]
We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences.
Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria such as depth of discussion, and identifying consensus and controversy for the specific domain, and (2) the combination of task decomposition and iterative self-refinement shows strong potential for enhancing the opinions.
arXiv Detail & Related papers (2023-05-24T02:33:35Z)
- Combat AI With AI: Counteract Machine-Generated Fake Restaurant Reviews on Social Media [77.34726150561087]
We propose to leverage the high-quality elite Yelp reviews to generate fake reviews from the OpenAI GPT review creator.
We apply the model to predict non-elite reviews and identify the patterns across several dimensions.
We show that social media platforms are continuously challenged by machine-generated fake reviews.
arXiv Detail & Related papers (2023-02-10T19:40:10Z)
- Fake or Genuine? Contextualised Text Representation for Fake Review Detection [0.4724825031148411]
This paper proposes a new ensemble model that employs transformer architecture to discover the hidden patterns in a sequence of fake reviews and detect them precisely.
The experimental results using semi-real benchmark datasets showed the superiority of the proposed model over state-of-the-art models.
arXiv Detail & Related papers (2021-12-29T00:54:47Z)
- SIFN: A Sentiment-aware Interactive Fusion Network for Review-based Item Recommendation [48.1799451277808]
We propose a Sentiment-aware Interactive Fusion Network (SIFN) for review-based item recommendation.
We first encode user/item reviews via BERT and propose a light-weighted sentiment learner to extract semantic features of each review.
Then, we propose a sentiment prediction task that guides the sentiment learner to extract sentiment-aware features via explicit sentiment labels.
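The fusion step described above can be sketched with a toy gated combination of the two feature streams. This is not the SIFN implementation: in the paper the semantic features come from BERT and the sentiment learner is trained with explicit sentiment labels, whereas here both are stand-in vectors and the gate is a hypothetical toy formulation.

```python
# Toy sketch (not SIFN itself): element-wise gated fusion of a review's
# semantic features with sentiment-aware features.
import math

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def fuse(semantic, sentiment, gate_weights):
    """Gated fusion: g * semantic + (1 - g) * sentiment, element-wise."""
    assert len(semantic) == len(sentiment) == len(gate_weights)
    fused = []
    for s, t, w in zip(semantic, sentiment, gate_weights):
        g = sigmoid(w * (s + t))  # toy gate driven by both feature streams
        fused.append(g * s + (1 - g) * t)
    return fused
```

In a real model the gate weights would be learned jointly with the recommendation and sentiment-prediction objectives.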
arXiv Detail & Related papers (2021-08-18T08:04:38Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Fact or Factitious? Contextualized Opinion Spam Detection [9.415901312074336]
We apply a number of machine learning approaches found to be effective, and introduce our own approach by fine-tuning state-of-the-art contextualised embeddings.
The results we obtain show the potential of contextualised embeddings for fake review detection, and lay the groundwork for future research in this area.
arXiv Detail & Related papers (2020-10-29T00:59:06Z)
- ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis [62.76038841302741]
We build a novel ReviewRobot to automatically assign a review score and write comments for multiple categories such as novelty and meaningful comparison.
Experimental results show that our review score predictor reaches 71.4%-100% accuracy.
Human assessment by domain experts shows that 41.7%-70.5% of the comments generated by ReviewRobot are valid and constructive, and better than human-written ones for 20% of the time.
arXiv Detail & Related papers (2020-10-13T02:17:58Z)
- Fake Reviews Detection through Analysis of Linguistic Features [1.609940380983903]
This paper explores a natural language processing approach to identify fake reviews.
We study 15 linguistic features for distinguishing fake and trustworthy online reviews.
We were able to discriminate fake from real reviews with high accuracy using these linguistic features.
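The abstract does not list the 15 linguistic features, so the sketch below uses four hypothetical stand-ins (review length, exclamation use, capitalization, first-person pronoun rate) that are commonly cited in this line of work, only to show what such feature extraction can look like.

```python
# Illustrative sketch only: example linguistic features for a review.
# The actual 15 features studied in the paper are not reproduced here.

def linguistic_features(review: str) -> dict:
    words = review.split()
    n = max(len(words), 1)  # avoid division by zero on empty input
    first_person = {"i", "me", "my", "mine", "we", "our"}
    return {
        "num_words": len(words),
        "exclamation_ratio": review.count("!") / n,
        "caps_word_ratio": sum(w.isupper() for w in words) / n,
        "first_person_ratio": sum(
            w.lower().strip(".,!?") in first_person for w in words
        ) / n,
    }
```

Feature vectors like this would then feed a standard classifier (e.g. logistic regression or an SVM) trained on labeled fake and trustworthy reviews.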
arXiv Detail & Related papers (2020-10-08T21:16:30Z)
- How Useful are Reviews for Recommendation? A Critical Review and Potential Improvements [8.471274313213092]
We investigate a growing body of work that seeks to improve recommender systems through the use of review text.
Our initial findings reveal several discrepancies in reported results, partly due to copying results across papers despite changes in experimental settings or data pre-processing.
Further investigation calls for discussion on a much larger problem about the "importance" of user reviews for recommendation.
arXiv Detail & Related papers (2020-05-25T16:30:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.