Fake Reviews Detection through Ensemble Learning
- URL: http://arxiv.org/abs/2006.07912v1
- Date: Sun, 14 Jun 2020 14:24:02 GMT
- Title: Fake Reviews Detection through Ensemble Learning
- Authors: Luis Gutierrez-Espinoza and Faranak Abri and Akbar Siami Namin and
Keith S. Jones and David R. W. Sears
- Abstract summary: Several machine learning-based approaches can automatically detect deceptive and fake reviews.
This paper evaluates the performance of ensemble learning-based approaches to identify bogus online information.
- Score: 1.609940380983903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Customers express their satisfaction with products by sharing
their experiences through online reviews. Several machine learning-based
approaches can automatically detect deceptive and fake reviews.
Recently, there have been studies reporting the performance of ensemble
learning-based approaches in comparison to conventional machine learning
techniques. Motivated by the recent trends in ensemble learning, this paper
evaluates the performance of ensemble learning-based approaches to identify
bogus online information. Applying a number of ensemble learning-based
approaches to a collection of fake restaurant reviews that we developed shows
that these approaches detect deceptive information better than conventional
machine learning algorithms.
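The voting-style combination at the heart of such ensemble approaches can be sketched in a few lines. The base classifiers below (keyword and length heuristics) are purely illustrative placeholders, not the paper's actual models or features; the point is only how individual predictions are combined by majority vote:

```python
from collections import Counter

# Illustrative sketch of a majority-vote ensemble for fake-review detection.
# The base classifiers here are toy heuristics, not the paper's learners.

def keyword_classifier(keywords):
    """Flag a review as fake (1) if any suspicious keyword appears."""
    def classify(review):
        text = review.lower()
        return 1 if any(k in text for k in keywords) else 0
    return classify

def length_classifier(max_words=5):
    """Flag very short reviews as fake (1)."""
    def classify(review):
        return 1 if len(review.split()) <= max_words else 0
    return classify

def majority_vote(classifiers, review):
    """Combine base predictions by majority vote, as in voting ensembles."""
    votes = [clf(review) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

ensemble = [
    keyword_classifier(["best ever", "amazing deal"]),
    keyword_classifier(["must buy", "life changing"]),
    length_classifier(max_words=5),
]

print(majority_vote(ensemble, "Best ever! Must buy!"))  # 1 (flagged fake)
print(majority_vote(ensemble, "The pasta was decent and the service was slow but friendly."))  # 0
```

In practice the base learners would be trained classifiers (and the ensemble could also use bagging or boosting rather than plain voting), but the combination step works the same way.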
Related papers
- Learner Attentiveness and Engagement Analysis in Online Education Using Computer Vision [3.449808359602251]
This research presents a computer vision-based approach to analyze and quantify learners' attentiveness, engagement, and other affective states within online learning scenarios.
A machine learning-based algorithm is developed on top of the classification model that outputs a comprehensive attentiveness index of the learners.
An end-to-end pipeline is proposed through which learners' live video feed is processed, providing detailed attentiveness analytics of the learners to the instructors.
arXiv Detail & Related papers (2024-11-30T10:54:08Z)
- A review on discriminative self-supervised learning methods [6.24302896438145]
Self-supervised learning has emerged as a method to extract robust features from unlabeled data.
This paper provides a review of discriminative approaches of self-supervised learning within the domain of computer vision.
arXiv Detail & Related papers (2024-05-08T11:15:20Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look into existing self-supervised methods of speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
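The mutual-information estimate mentioned above can be illustrated in the simplest discrete setting (a simplification: the paper estimates it for continuous speech representations via probes, whereas this sketch just counts co-occurrences of paired discrete samples):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X; Y) in bits from paired discrete samples
    using empirical joint and marginal frequencies."""
    n = len(xs)
    p_x = Counter(xs)
    p_y = Counter(ys)
    p_xy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in p_xy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y), with the 1/n factors cancelled out
        mi += p_joint * math.log2(p_joint * n * n / (p_x[x] * p_y[y]))
    return mi

# Labels perfectly predictable from (discretized) representation codes:
labels = [0, 0, 1, 1]
codes = [7, 7, 3, 3]
print(round(mutual_information(labels, codes), 3))  # 1.0 bit
```

A representation that carries no information about the labels would score close to zero bits.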
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations [1.1060425537315088]
Federated learning is a distributed learning framework enhanced to preserve the privacy of individuals' data.
In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data.
This paper is a systematic review of the literature on privacy-preserving machine learning in the last few years.
arXiv Detail & Related papers (2023-11-17T19:23:21Z)
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper surveys existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Learning Representations for New Sound Classes With Continual Self-Supervised Learning [30.35061954854764]
We present a self-supervised learning framework for continually learning representations for new sound classes.
We show that representations learned with the proposed method generalize better and are less susceptible to catastrophic forgetting.
arXiv Detail & Related papers (2022-05-15T22:15:21Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Understand and Improve Contrastive Learning Methods for Visual Representation: A Review [1.4650545418986058]
A promising alternative, self-supervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling.
This literature review aims to provide an up-to-date analysis of the efforts of researchers to understand the key components and the limitations of self-supervised learning.
arXiv Detail & Related papers (2021-06-06T21:59:49Z)
- KnowledgeCheckR: Intelligent Techniques for Counteracting Forgetting [52.623349754076024]
We provide an overview of the recommendation approaches integrated in KnowledgeCheckR.
Examples thereof are utility-based recommendation that helps to identify learning contents to be repeated in the future, collaborative filtering approaches that help to implement session-based recommendation, and content-based recommendation that supports intelligent question answering.
arXiv Detail & Related papers (2021-02-15T20:06:28Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and larger query sizes.
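Uncertainty sampling, the building block of the query-selection step discussed above, can be sketched for a binary classifier (a generic illustration under assumed inputs, not the paper's library: `probs` stands for a model's predicted positive-class probabilities on unlabeled examples):

```python
def uncertainty_sampling(probs, query_size):
    """Select indices of the unlabeled examples whose predicted
    positive-class probability is closest to 0.5 (most uncertain),
    to be sent for labeling in the next active learning round."""
    ranked = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return ranked[:query_size]

# Model confidence on five unlabeled examples:
probs = [0.95, 0.52, 0.10, 0.49, 0.80]
print(uncertainty_sampling(probs, query_size=2))  # [3, 1]
```

A larger `query_size`, as the paper suggests, amortizes the cost of retraining over more newly labeled examples per loop iteration.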
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.