To Explain Or Not To Explain: An Empirical Investigation Of AI-Based Recommendations On Social Media Platforms
- URL: http://arxiv.org/abs/2508.16610v1
- Date: Wed, 13 Aug 2025 01:05:49 GMT
- Title: To Explain Or Not To Explain: An Empirical Investigation Of AI-Based Recommendations On Social Media Platforms
- Authors: AKM Bahalul Haque, A. K. M. Najmul Islam, Patrick Mikalef
- Abstract summary: This paper investigates social media recommendations from an end-user perspective. We asked participants about the social media content suggestions, their comprehensibility, and explainability. Our analysis shows users mostly require explanation whenever they encounter unfamiliar content.
- Score: 0.1274452325287335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-based social media recommendations have great potential to improve the user experience. However, these recommendations often do not match users' interests and create an unpleasant experience. Moreover, because the recommendation system is a black box, it raises comprehensibility and transparency issues. This paper investigates social media recommendations from an end-user perspective. For the investigation, we used the popular social media platform Facebook and recruited regular users for a qualitative analysis. We asked participants about the social media content suggestions, their comprehensibility, and explainability. Our analysis shows that users mostly require an explanation when they encounter unfamiliar content and to ensure their online data security. Furthermore, users require concise, non-technical explanations along with the facility of controlled information flow. In addition, we observed that explanations affect users' perception of transparency, trust, and understandability. Finally, we outline some design implications and present a synthesized framework based on our data analysis.
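The abstract's main findings (explain unfamiliar content, keep explanations concise and non-technical, and give users controlled information flow) can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's framework; all names, fields, and the familiarity check are assumptions made for the example.

```python
# Hypothetical sketch of on-demand recommendation explanations, reflecting the
# paper's findings: explain only unfamiliar content or on request, keep the
# text concise and non-technical, and let the user control how much is shown.
from dataclasses import dataclass


@dataclass
class Recommendation:
    item_id: str
    topic: str
    reason: str  # short, non-technical reason, e.g. "similar to pages you follow"


def needs_explanation(rec, familiar_topics, user_requested=False):
    """Explain only when the user asks, or the topic is unfamiliar to them."""
    return user_requested or rec.topic not in familiar_topics


def explain(rec, familiar_topics, user_requested=False, detail=False):
    """Return a concise explanation on demand; extra detail only if requested."""
    if not needs_explanation(rec, familiar_topics, user_requested):
        return None  # nothing shown: controlled information flow
    text = f"Suggested because: {rec.reason}."
    if detail:
        text += " You can adjust this in your content preferences."
    return text


rec = Recommendation("post42", topic="crypto",
                     reason="friends engaged with similar posts")
# Unfamiliar topic -> a short explanation is produced.
print(explain(rec, familiar_topics={"sports", "music"}))
# Familiar topic, no request -> no explanation is shown.
print(explain(rec, familiar_topics={"crypto"}))
```

The design choice mirrors the study's emphasis on brevity: the default path returns one plain-language sentence, and anything further is gated behind an explicit user request.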
Related papers
- COMMUNITYNOTES: A Dataset for Exploring the Helpfulness of Fact-Checking Explanations [89.37527535663433]
We present a large-scale dataset of 104k posts with user-provided notes and helpfulness labels. We propose a framework that automatically generates and improves reason definitions via automatic prompt optimization. Our experiments show that the optimized definitions can improve both helpfulness and reason prediction.
arXiv Detail & Related papers (2025-10-28T05:28:47Z) - Context-Aware Visualization for Explainable AI Recommendations in Social Media: A Vision for User-Aligned Explanations [0.0]
We propose a user-segmented, context-aware explanation layer: a visual explanation system with diverse explanation methods. Our framework is the first to jointly adapt explanation style (visual vs. numeric) and granularity (expert vs. lay) inside a single pipeline. A public pilot with 30 X users will validate its impact on decision-making and trust.
arXiv Detail & Related papers (2025-08-01T14:47:47Z) - Can User Feedback Help Issue Detection? An Empirical Study on a One-billion-user Online Service System [28.43595612060133]
We conduct an empirical study on 50,378,766 user feedback items from six real-world services in a one-billion-user online service system. Our results show that a large proportion of user feedback provides irrelevant information about system issues. We find severe issues that cannot be easily detected based solely on user feedback characteristics.
arXiv Detail & Related papers (2025-08-01T12:49:07Z) - How Does Users' App Knowledge Influence the Preferred Level of Detail and Format of Software Explanations? [2.423517761302909]
This study investigates factors influencing users' preferred level of detail and the form of an explanation in software. Results indicate that users prefer moderately detailed explanations in short text formats. Our results show that explanation preferences are weakly influenced by app-specific knowledge but shaped by demographic and psychological factors.
arXiv Detail & Related papers (2025-02-10T15:18:04Z) - Do We Trust What They Say or What They Do? A Multimodal User Embedding Provides Personalized Explanations [35.77028281332307]
We propose Contribution-Aware Multimodal User Embedding (CAMUE) for social networks.
We show that our approach can provide personalized explainable predictions, automatically mitigating the impact of unreliable information.
Our work paves the way for more explainable, reliable, and effective social media user embedding.
arXiv Detail & Related papers (2024-09-04T02:17:32Z) - Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes"
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective [0.3069335774032178]
This study aims to investigate user-centric explainable AI, considering recommendation systems as the study context.
We conducted focus group interviews to collect qualitative data on the recommendation system.
Our findings reveal that end users want a non-technical and tailor-made explanation with on-demand supplementary information.
arXiv Detail & Related papers (2023-11-01T22:20:14Z) - Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs)
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within a MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z) - Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
We evaluate the proposed framework on a trust-aware item recommendation task in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation approaches, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring findings and creating data stories.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2021-01-19T16:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.