Detecting and Mitigating Bias in Algorithms Used to Disseminate Information in Social Networks
- URL: http://arxiv.org/abs/2405.12764v2
- Date: Wed, 30 Oct 2024 10:42:50 GMT
- Title: Detecting and Mitigating Bias in Algorithms Used to Disseminate Information in Social Networks
- Authors: Vedran Sekara, Ivan Dotu, Manuel Cebrian, Esteban Moro, Manuel Garcia-Herranz
- Abstract summary: Influence maximization algorithms are used to identify sets of influencers.
We show that seeding information using these methods creates information gaps.
We devise a multi-objective algorithm which maximizes influence and information equity.
- Abstract: Social connections are conduits through which individuals communicate, information propagates, and diseases spread. Identifying individuals who are more likely to adopt ideas and spread them is essential in order to develop effective information campaigns, maximize the reach of resources, and fight epidemics. Influence maximization algorithms are used to identify sets of influencers. Based on extensive computer simulations on synthetic and ten diverse real-world social networks we show that seeding information using these methods creates information gaps. Our results show that these algorithms select influencers who do not disseminate information equitably, threatening to create an increasingly unequal society. To overcome this issue we devise a multi-objective algorithm which maximizes influence and information equity. Our results demonstrate it is possible to reduce vulnerability at a relatively low trade-off with respect to spread. This highlights that in our search for maximizing information we do not need to compromise on information equality.
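The abstract does not spell out the multi-objective algorithm, so the sketch below is only an illustrative stand-in: a greedy seed selector under the independent cascade model that scores each candidate by a weighted sum of expected spread and a min-across-groups equity term. The function names, the `lam` trade-off weight, and the use of minimum group coverage as the equity measure are all assumptions, not the authors' method.

```python
import random

def simulate_ic(adj, seeds, p=0.1, rng=None):
    """One run of the independent cascade model; returns the set of activated nodes."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                # each newly active node gets one chance to activate each neighbor
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def equity(active, groups):
    """Minimum fraction reached across groups (higher = more equitable)."""
    return min(len(active & g) / len(g) for g in groups)

def greedy_multiobjective(adj, nodes, groups, k, lam=0.5, runs=50, p=0.1):
    """Greedily pick k seeds maximizing (1-lam)*spread + lam*equity (Monte Carlo estimates)."""
    rng = random.Random(42)
    seeds = set()
    for _ in range(k):
        best, best_score = None, -1.0
        for cand in nodes:
            if cand in seeds:
                continue
            trial = seeds | {cand}
            spread_sum = equity_sum = 0.0
            for _ in range(runs):
                active = simulate_ic(adj, trial, p, rng)
                spread_sum += len(active) / len(nodes)
                equity_sum += equity(active, groups)
            score = (1 - lam) * spread_sum / runs + lam * equity_sum / runs
            if score > best_score:
                best, best_score = cand, score
        seeds.add(best)
    return seeds
```

Setting `lam=0` recovers plain spread-maximizing greedy seeding; raising `lam` trades a small amount of expected reach for more even coverage across groups, mirroring the trade-off the abstract reports.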
Related papers
- Epidemiology-informed Network for Robust Rumor Detection [59.89351792706995]
We propose a novel Epidemiology-informed Network (EIN) that integrates epidemiological knowledge to enhance performance.
To adapt epidemiology theory to rumor detection, it is expected that each user's stance toward the source information will be annotated.
Our experimental results demonstrate that the proposed EIN not only outperforms state-of-the-art methods on real-world datasets but also exhibits enhanced robustness across varying tree depths.
arXiv Detail & Related papers (2024-11-20T00:43:32Z) - MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Private Knowledge Sharing in Distributed Learning: A Survey [50.51431815732716]
The rise of Artificial Intelligence has revolutionized numerous industries and transformed the way society operates.
It is crucial to utilize information in learning processes that are either distributed or owned by different entities.
Modern data-driven services have been developed to integrate distributed knowledge entities into their outcomes.
arXiv Detail & Related papers (2024-02-08T07:18:23Z) - Online Auditing of Information Flow [4.557963624437785]
We consider the problem of online auditing of information flow/propagation with the goal of classifying news items as fake or genuine.
We propose a probabilistic Markovian information spread model over networks modeled by graphs.
We find the optimal detection algorithm minimizing the aforementioned risk and prove several statistical guarantees.
arXiv Detail & Related papers (2023-10-23T06:03:55Z) - Fair Information Spread on Social Networks with Community Structure [2.9613974659787132]
Influence maximization (IM) algorithms aim to identify individuals who will generate the greatest spread through the social network if provided with information.
This work relies on fitting a model to the social network which is then used to determine a seed allocation strategy for optimal fair information spread.
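The seed allocation strategy itself is not described in this summary; as one hedged illustration, a community-aware allocator might split the seed budget across communities proportionally to their size using the largest-remainder method, so that no community is starved of seeds. The function name and the proportional rule below are assumptions for illustration, not the paper's fitted-model strategy.

```python
def allocate_seeds(community_sizes, budget):
    """Split a seed budget across communities proportionally to size
    (largest-remainder rounding), so allocations always sum to the budget."""
    total = sum(community_sizes)
    quotas = [budget * s / total for s in community_sizes]
    alloc = [int(q) for q in quotas]  # floor of each exact quota
    # hand out the leftover seeds to the communities with the largest remainders
    leftovers = sorted(range(len(quotas)),
                       key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in leftovers[: budget - sum(alloc)]:
        alloc[i] += 1
    return alloc
```

For example, `allocate_seeds([60, 30, 10], 10)` yields `[6, 3, 1]`, giving every community a share of seeds rather than concentrating them in the largest one.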
arXiv Detail & Related papers (2023-05-15T16:51:18Z) - Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence [AI] applications have led to increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z) - CLAIM: Curriculum Learning Policy for Influence Maximization in Unknown Social Networks [14.695979686066062]
We propose CLAIM - Curriculum LeArning Policy for Influence Maximization to improve the sample efficiency of RL methods.
We conduct experiments on real-world datasets and show that our approach can outperform the current best approach.
arXiv Detail & Related papers (2021-07-08T04:52:50Z) - Understanding Health Misinformation Transmission: An Interpretable Deep Learning Approach to Manage Infodemics [6.08461198240039]
This study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD) to predict health misinformation transmission in social media.
We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos.
Our findings provide direct implications for social media platforms and policymakers to design proactive interventions to identify misinformation, control transmissions, and manage infodemics.
arXiv Detail & Related papers (2020-12-21T15:49:19Z) - FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.