Estimating Topic Exposure for Under-Represented Users on Social Media
- URL: http://arxiv.org/abs/2208.03796v1
- Date: Sun, 7 Aug 2022 19:37:41 GMT
- Title: Estimating Topic Exposure for Under-Represented Users on Social Media
- Authors: Mansooreh Karami, Ahmadreza Mosallanezhad, Paras Sheth, and Huan Liu
- Abstract summary: This work focuses on highlighting the contributions of the engagers in the observed data.
The first step in behavioral analysis of these users is to find the topics they are exposed to but did not engage with.
We propose a novel framework that aids in identifying these users and estimates their topic exposure.
- Score: 25.963970325207892
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Online Social Networks (OSNs) facilitate access to a variety of data allowing
researchers to analyze users' behavior and develop user behavioral analysis
models. These models rely heavily on the observed data, which is usually biased
due to participation inequality. This inequality divides online users into three
groups: the lurkers, users who solely consume content; the engagers, users who
contribute little to content creation; and the contributors, users who create
the majority of the online content. Failing to consider the contributions of all
three groups when interpreting population-level interests or sentiments may
yield biased results.
To reduce the bias induced by the contributors, in this work we focus on
highlighting the engagers' contributions in the observed data: engagers are more
likely to contribute than lurkers, and they comprise a larger population than
the contributors. The first step in the behavioral analysis of these users is to
find the topics they were exposed to but did not engage with. To do so, we
propose a novel framework that identifies these users and estimates their topic
exposure. The exposure-estimation mechanism incorporates behavioral patterns
from similar contributors as well as users' demographic and profile information.
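The abstract does not give the estimator's exact form, so the following is only an illustrative sketch of the general idea: approximate an engager's topic exposure as a similarity-weighted mixture of the topic engagement of contributors, with similarity computed from demographic/profile feature vectors. All function names, features, and data below are hypothetical.

```python
import numpy as np

def estimate_exposure(engager_profile, contributor_profiles, contributor_topics):
    """Hypothetical exposure estimator (not the paper's actual method).

    engager_profile:      (d,)   demographic/profile feature vector
    contributor_profiles: (n, d) feature matrix, one row per contributor
    contributor_topics:   (n, k) per-contributor topic-engagement distributions
    returns:              (k,)   estimated topic-exposure distribution
    """
    # Cosine similarity between the engager and each contributor.
    num = contributor_profiles @ engager_profile
    denom = (np.linalg.norm(contributor_profiles, axis=1)
             * np.linalg.norm(engager_profile) + 1e-12)
    sim = num / denom
    # Keep only non-negatively similar contributors and normalize to weights.
    weights = np.clip(sim, 0.0, None)
    weights = weights / (weights.sum() + 1e-12)
    # Weighted mixture of contributor topic distributions.
    return weights @ contributor_topics

# Toy example: two contributors resemble the engager, one does not.
profiles = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
topics = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]])
exposure = estimate_exposure(np.array([1.0, 0.0]), profiles, topics)
```

Because the contributor topic rows are distributions and the weights sum to one, the estimate is itself a valid topic distribution dominated by the similar contributors' topics.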
Related papers
- Authenticity and exclusion: social media algorithms and the dynamics of belonging in epistemic communities [0.8287206589886879]
This paper examines how social media platforms and their recommendation algorithms shape the professional visibility and opportunities of researchers from minority groups.
Using agent-based simulations, we uncover three key patterns: First, these algorithms disproportionately harm the professional visibility of researchers from minority groups.
Second, within these minority groups, the algorithms result in greater visibility for users who more closely resemble the majority group, incentivizing assimilation at the cost of professional invisibility.
arXiv Detail & Related papers (2024-07-11T14:36:58Z) - Insights from an experiment crowdsourcing data from thousands of US Amazon users: The importance of transparency, money, and data use [6.794366017852433]
This paper shares an innovative approach to crowdsourcing user data to collect otherwise inaccessible Amazon purchase histories, spanning 5 years, from more than 5000 US users.
We developed a data collection tool that prioritizes participant consent and includes an experimental study design.
Experiment results (N=6325) reveal both monetary incentives and transparency can significantly increase data sharing.
arXiv Detail & Related papers (2024-04-19T20:45:19Z) - Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users receive significantly lower-quality recommendations than head users after joint training.
A model trained on tail users alone still achieves inferior results due to limited data.
We propose a novel approach that significantly improves recommendation performance for tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
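The paper's exact metric family is not reproduced here; as a rough, hypothetical sketch of the underlying idea, producer-side group exposure can be accumulated from ranked lists using a standard position-bias discount and then compared across groups. The discount choice (1/log2(rank + 2)) and all identifiers are assumptions for illustration.

```python
import numpy as np

def position_exposure(rank):
    # Common position-bias model: exposure decays logarithmically with rank
    # (0-based), so top-ranked items receive the most exposure.
    return 1.0 / np.log2(rank + 2)

def group_exposure(rankings, item_group):
    """Sum discounted exposure per producer group (illustrative only).

    rankings:   list of ranked item-id lists, one per consumer
    item_group: dict mapping item id -> producer group label
    returns:    dict mapping group label -> total exposure
    """
    totals = {}
    for ranking in rankings:
        for r, item in enumerate(ranking):
            g = item_group[item]
            totals[g] = totals.get(g, 0.0) + position_exposure(r)
    return totals

# Toy example: items from the "minor" group are always ranked last.
rankings = [["a", "b", "c"], ["b", "a", "c"]]
groups = {"a": "major", "b": "major", "c": "minor"}
exp = group_exposure(rankings, groups)
disparity = exp["major"] - exp["minor"]  # positive => majority over-exposed
```

A systemic bias shows up here as a persistent positive disparity even when individual rankings look reasonable, which is the kind of group-level concern the metrics target.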
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Causal Disentanglement with Network Information for Debiased Recommendations [34.698181166037564]
Recent research proposes to debias by modeling a recommender system from a causal perspective.
The critical challenge in this setting is accounting for the hidden confounders.
We propose to leverage network information (i.e., user-social and user-item networks) to better approximate hidden confounders.
arXiv Detail & Related papers (2022-04-14T20:55:11Z) - Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement dynamic.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Learning User Embeddings from Temporal Social Media Data: A Survey [15.324014759254915]
We survey representative work on learning a concise latent user representation (a.k.a. user embedding) that can capture the main characteristics of a social media user.
The learned user embeddings can later be used to support different downstream user analysis tasks such as personality modeling, suicidal risk assessment and purchase decision prediction.
arXiv Detail & Related papers (2021-05-17T16:22:43Z) - Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward understanding coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.