Impact of Fake News on Social Media Towards Public Users of Different Age Groups
- URL: http://arxiv.org/abs/2411.05638v1
- Date: Fri, 08 Nov 2024 15:32:20 GMT
- Title: Impact of Fake News on Social Media Towards Public Users of Different Age Groups
- Authors: Kahlil bin Abdul Hakim, Sathishkumar Veerappampalayam Easwaramoorthy,
- Abstract summary: This study examines how fake news affects social media users across a range of age groups.
The paper evaluates various machine learning models for their efficacy in identifying and categorizing fake news.
- Abstract: This study examines how fake news affects social media users across a range of age groups and how machine learning (ML) and artificial intelligence (AI) can help reduce the spread of false information. The paper evaluates various machine learning models for their efficacy in identifying and categorizing fake news and examines current trends in the spread of fake news, including deepfake technology. The study assesses four models using a Kaggle dataset: Random Forest, Support Vector Machine (SVM), Neural Networks, and Logistic Regression. The results show that SVM and neural networks perform better than the other models, with accuracies of 93.29% and 93.69%, respectively. The study also emphasizes that older users' diminished capacity for critical analysis of news content makes them more susceptible to disinformation. Natural language processing (NLP) and deep learning approaches have the potential to improve the accuracy of fake news detection. Despite these developments, biases in AI and ML models and difficulties in identifying AI-generated content remain major problems. The study recommends that datasets be expanded to encompass a wider range of languages and that detection algorithms be continuously improved to keep pace with the latest advancements in disinformation tactics. To combat fake news and promote an informed and resilient society, this study emphasizes the value of cooperative efforts between AI researchers, social media platforms, and governments.
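The kind of four-model comparison the abstract describes can be sketched as below. This is a minimal illustration, not the authors' setup: the Kaggle dataset is replaced by a tiny placeholder corpus, and the features (TF-IDF) and hyperparameters are assumptions.

```python
# Hedged sketch of comparing Random Forest, SVM, a neural network, and
# Logistic Regression on a text classification task (1 = fake, 0 = real).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny placeholder corpus standing in for the Kaggle dataset.
texts = [
    "shocking miracle cure doctors hate revealed",
    "celebrity secretly replaced by clone insiders say",
    "you will not believe this one weird trick",
    "aliens control the government leaked memo claims",
    "central bank raises interest rates by a quarter point",
    "city council approves new public transit budget",
    "researchers publish peer reviewed study on vaccines",
    "quarterly earnings report shows modest revenue growth",
] * 5
labels = ([1] * 4 + [0] * 4) * 5

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

# Illustrative hyperparameters; the paper does not specify its configurations.
models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": LinearSVC(),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                    random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

# Each pipeline turns raw text into TF-IDF features, then fits the classifier.
accuracies = {}
for name, model in models.items():
    pipeline = make_pipeline(TfidfVectorizer(), model)
    pipeline.fit(X_train, y_train)
    accuracies[name] = pipeline.score(X_test, y_test)

for name, acc in accuracies.items():
    print(f"{name}: {acc:.2%}")
```

On a real corpus, the reported 93%-range accuracies for SVM and neural networks would be read off exactly this kind of held-out evaluation.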
Related papers
- Ethio-Fake: Cutting-Edge Approaches to Combat Fake News in Under-Resourced Languages Using Explainable AI [44.21078435758592]
Misinformation can spread quickly due to the ease of creating and disseminating content.
Traditional approaches to fake news detection often rely solely on content-based features.
We propose a comprehensive approach that integrates social context-based features with news content features.
arXiv Detail & Related papers (2024-10-03T15:49:35Z)
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Development of Fake News Model using Machine Learning through Natural Language Processing [0.7120858995754653]
We use machine learning algorithms for the identification of fake news.
Simple classification alone is not sufficient for reliable fake news detection.
With the integration of machine learning and text-based processing, we can detect fake news.
arXiv Detail & Related papers (2022-01-19T09:26:15Z)
- SOK: Fake News Outbreak 2021: Can We Stop the Viral Spread? [5.64512235559998]
Social networks' omnipresence and ease of use have revolutionized the generation and distribution of information in today's world.
Unlike traditional media channels, social networks facilitate faster and wider spread of disinformation and misinformation.
Viral spread of false information has serious implications on the behaviors, attitudes and beliefs of the public.
arXiv Detail & Related papers (2021-05-22T09:26:13Z)
- Adversarial Active Learning based Heterogeneous Graph Neural Network for Fake News Detection [18.847254074201953]
We propose a novel fake news detection framework, namely the Adversarial Active Learning-based Heterogeneous Graph Neural Network (AA-HGNN).
AA-HGNN utilizes an active learning framework to enhance learning performance, especially when facing the paucity of labeled data.
Experiments with two real-world fake news datasets show that our model can outperform text-based models and other graph-based models.
arXiv Detail & Related papers (2021-01-27T05:05:25Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.