An Approach for Time-aware Domain-based Social Influence Prediction
- URL: http://arxiv.org/abs/2001.07838v1
- Date: Sun, 19 Jan 2020 10:39:37 GMT
- Title: An Approach for Time-aware Domain-based Social Influence Prediction
- Authors: Bilal Abu-Salih, Kit Yan Chan, Omar Al-Kadi, Marwan Al-Tawil, Pornpit
Wongthongtham, Tomayess Issa, Heba Saadeh, Malak Al-Hassan, Bushra Bremie,
Abdulaziz Albahlal
- Abstract summary: This paper presents an approach that incorporates semantic analysis and machine learning modules to measure and predict users' trustworthiness.
The evaluation of the conducted experiment validates the applicability of the incorporated machine learning techniques to predict highly trustworthy domain-based users.
- Score: 4.753874889216745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online Social Networks (OSNs) have established virtual platforms enabling
people to express their opinions, interests and thoughts in a variety of
contexts and domains, allowing legitimate users as well as spammers and other
untrustworthy users to publish and spread their content. Hence, the concept of
social trust has attracted the attention of information processors/data
scientists and information consumers/business firms. One of the main reasons
for acquiring the value of Social Big Data (SBD) is to provide frameworks and
methodologies with which the credibility of OSNs users can be evaluated. These
approaches should be scalable to accommodate large-scale social data.
Consequently, a thorough understanding of social trust is needed to improve and
expand the analysis process and to infer the credibility of SBD. Given the open
settings of OSNs and the few restrictions they impose, the medium allows
legitimate and genuine users as well as spammers and other low-trustworthy
users to publish and spread their content. Accordingly, this paper presents an
approach that incorporates semantic analysis and machine learning modules to
measure and predict users' trustworthiness in numerous domains over different
time periods. The evaluation of the conducted experiment validates the
applicability of the incorporated machine learning techniques to predict highly
trustworthy domain-based users.
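The time-aware, domain-based idea in the abstract can be sketched as a per-user, per-domain aggregation of credibility signals across time periods, with recent activity weighted more heavily. The function name, the exponential half-life decay, and the toy data below are illustrative assumptions, not the paper's actual features or pipeline:

```python
from collections import defaultdict


def domain_trust_scores(posts, half_life=2):
    """Aggregate per-(user, domain) trust evidence with exponential
    time decay, so recent posts weigh more than older ones.

    `posts` is a list of (user, domain, time_period, credibility)
    tuples, where `credibility` is a signal in [0, 1] (e.g. produced
    by an upstream semantic-analysis step). The decay scheme is a
    hypothetical stand-in for the paper's time-aware modeling.
    """
    latest = max(t for _, _, t, _ in posts)
    num = defaultdict(float)
    den = defaultdict(float)
    for user, domain, t, cred in posts:
        # Weight halves for every `half_life` periods of age.
        w = 0.5 ** ((latest - t) / half_life)
        num[(user, domain)] += w * cred
        den[(user, domain)] += w
    return {key: num[key] / den[key] for key in num}


scores = domain_trust_scores([
    ("alice", "health", 1, 0.9),
    ("alice", "health", 3, 0.8),
    ("bob",   "health", 3, 0.2),
])
```

A score table like this could then feed a downstream classifier that labels users as highly trustworthy or not within each domain, which is the prediction task the abstract describes.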
Related papers
- Evaluating Cultural and Social Awareness of LLM Web Agents [113.49968423990616]
We introduce CASA, a benchmark designed to assess large language models' sensitivity to cultural and social norms.
Our approach evaluates LLM agents' ability to detect and appropriately respond to norm-violating user queries and observations.
Experiments show that current LLMs perform significantly better in non-agent environments.
arXiv Detail & Related papers (2024-10-30T17:35:44Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Explainable assessment of financial experts' credibility by classifying social media forecasts and checking the predictions with actual market data [6.817247544942709]
We propose a credibility assessment solution for financial creators in social media that combines Natural Language Processing and Machine Learning.
The reputation of the contributors is assessed by automatically classifying their forecasts on asset values by type and verifying these predictions with actual market data.
The system provides natural language explanations of its decisions based on a model-agnostic analysis of relevant features.
arXiv Detail & Related papers (2024-06-17T08:08:03Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existent social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- SOK: Seeing and Believing: Evaluating the Trustworthiness of Twitter Users [4.609388510200741]
Currently, there is no automated way of determining which news or users are credible and which are not.
In this work, we created a model that analysed the behaviour of 50,000 politicians on Twitter.
We classified the political Twitter users as either trusted or untrusted using random forest, multilayer perceptron, and support vector machine classifiers.
arXiv Detail & Related papers (2021-07-16T17:39:32Z)
- Information Credibility in the Social Web: Contexts, Approaches, and Open Issues [2.2133187119466116]
Credibility, also referred to as believability, is a quality perceived by individuals, who are not always able to discern, with their own cognitive capacities, genuine information from fake information.
Several approaches have been proposed to automatically assess credibility in social media.
arXiv Detail & Related papers (2020-01-26T15:42:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.