An attention economy model of co-evolution between content quality and audience selectivity
- URL: http://arxiv.org/abs/2602.06437v1
- Date: Fri, 06 Feb 2026 07:07:57 GMT
- Title: An attention economy model of co-evolution between content quality and audience selectivity
- Authors: Masaki Chujyo, Isamu Okada, Hitoshi Yamamoto, Dongwoo Lim, Fujio Toriumi
- Abstract summary: Human attention has become a scarce and strategically contested resource in digital environments. We develop a minimal mathematical framework to explain how content quality and audience attention coevolve under limited attention capacity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human attention has become a scarce and strategically contested resource in digital environments. Content providers increasingly engage in excessive competition for visibility, often prioritizing attention-grabbing tactics over substantive quality. Despite extensive empirical evidence, however, there is a lack of theoretical models that explain the fundamental dynamics of the attention economy. Here, we develop a minimal mathematical framework to explain how content quality and audience attention coevolve under limited attention capacity. Using an evolutionary game approach, we model strategic feedback between providers, who decide how much effort to invest in production, and consumers, who choose whether to search selectively for high-quality content or to engage passively. Analytical and numerical results reveal three characteristic regimes of content dynamics: collapse, boundary, and coexistence. The transitions between these regimes depend on how effectively audiences can distinguish content quality. When audience discriminability is weak, both selective attention and high-quality production vanish, leading to informational collapse. When discriminability is sufficient and incentives are well aligned, high- and low-quality content dynamically coexist through feedback between audience selectivity and providers' effort. These findings identify two key conditions for sustaining a healthy information ecosystem: adequate discriminability among audiences and sufficient incentives for high-effort creation. The model provides a theoretical foundation for understanding how institutional and platform designs can prevent the degradation of content quality in the attention economy.
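The abstract's provider-consumer feedback can be illustrated with a toy two-population replicator dynamics simulation. Note that the payoff functions and all parameters below (benefit `b`, effort cost `c`, search cost `s`, discriminability `d`) are hypothetical stand-ins chosen for illustration, not the paper's actual model, whose functional forms are not given in the abstract.

```python
def step(x, y, d, b=2.0, c=1.0, s=0.3, dt=0.01):
    """One Euler step of two-population replicator dynamics.

    x: share of providers investing high effort
    y: share of consumers searching selectively
    d: audience discriminability (probability of recognizing quality)
    """
    # Providers: high effort costs c and pays off only insofar as selective
    # audiences can recognize quality; low effort captures passive attention.
    f_high = d * y * b - c
    f_low = (1 - d * y) * b * 0.5
    # Consumers: selective search costs s and finds quality with probability d*x.
    g_sel = d * x * b - s
    g_pas = x * b * 0.5
    x = x + dt * x * (1 - x) * (f_high - f_low)
    y = y + dt * y * (1 - y) * (g_sel - g_pas)
    return x, y

def run(d, x=0.5, y=0.8, steps=10_000):
    """Iterate the dynamics from a fixed interior starting point."""
    for _ in range(steps):
        x, y = step(x, y, d)
    return x, y

x_lo, y_lo = run(d=0.2)   # weak discriminability
x_hi, y_hi = run(d=0.95)  # strong discriminability
```

With these toy parameters, weak discriminability (d = 0.2) makes both high-effort production and selective attention decay toward zero, mirroring the collapse regime, while strong discriminability (d = 0.95) sustains both, mirroring the regime where quality and selectivity reinforce each other.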
Related papers
- MMPersuade: A Dataset and Evaluation Framework for Multimodal Persuasion [73.99171322670772]
Large Vision-Language Models (LVLMs) are increasingly deployed in domains such as shopping, health, and news. MMPersuade provides a unified framework for systematically studying multimodal persuasion dynamics in LVLMs.
arXiv Detail & Related papers (2025-10-26T17:39:21Z) - When or What? Understanding Consumer Engagement on Digital Platforms [1.593326304030926]
This study applies Latent Dirichlet Allocation modeling to a large corpus of TED Talks. By comparing the thematic supply of creators with the demand expressed in audience engagement, we identify persistent mismatches between producer offerings and consumer preferences.
arXiv Detail & Related papers (2025-10-12T06:53:57Z) - Algorithmic Fairness amid Social Determinants: Reflection, Characterization, and Approach [19.881116751039613]
Social determinants are variables that, while not directly pertaining to any specific individual, capture key aspects of contexts and environments. Previous algorithmic fairness literature has primarily focused on sensitive attributes, often overlooking the role of social determinants.
arXiv Detail & Related papers (2025-08-10T23:55:16Z) - Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation [5.907945985868999]
This study investigates the extent to which annotator demographic features influence labeling decisions compared to text content. Using a Generalized Linear Mixed Model, we quantify this influence, finding that demographic factors account for a minor fraction (8%) of the observed variance. We then assess the reliability of Generative AI (GenAI) models as annotators, specifically evaluating if guiding them with demographic personas improves alignment with human judgments.
arXiv Detail & Related papers (2025-07-17T14:00:13Z) - Modeling Beyond MOS: Quality Assessment Models Must Integrate Context, Reasoning, and Multimodality [45.34252727738116]
Mean Opinion Score (MOS) is no longer sufficient as the sole supervisory signal for multimedia quality assessment models. By reframing quality assessment as a contextual, explainable, and multimodal modeling task, we aim to catalyze a shift toward more robust, human-aligned, and trustworthy evaluation systems.
arXiv Detail & Related papers (2025-05-26T08:52:02Z) - Building Trustworthy Multimodal AI: A Review of Fairness, Transparency, and Ethics in Vision-Language Tasks [4.441767341563709]
This review explores the trustworthiness of multimodal artificial intelligence (AI) systems, specifically focusing on vision-language tasks. It addresses challenges related to fairness, transparency, and ethical implications in these systems.
arXiv Detail & Related papers (2025-04-14T21:10:25Z) - On the Fairness, Diversity and Reliability of Text-to-Image Generative Models [68.62012304574012]
Multimodal generative models have sparked critical discussions on their reliability, fairness, and potential for misuse. We propose an evaluation framework to assess model reliability by analyzing responses to global and local perturbations in the embedding space. Our method lays the groundwork for detecting unreliable, bias-injected models and tracing the provenance of embedded biases.
arXiv Detail & Related papers (2024-11-21T09:46:55Z) - Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts [23.97104853350071]
We empirically investigate visual fairness in several mainstream large vision-language models (LVLMs). Our fairness evaluation framework employs direct and single-choice question prompts on visual question-answering/classification tasks. We propose a potential multi-modal Chain-of-Thought (CoT) based strategy for unfairness mitigation, applicable to both open-source and closed-source LVLMs.
arXiv Detail & Related papers (2024-06-25T23:11:39Z) - Incentivizing High-Quality Content in Online Recommender Systems [80.19930280144123]
We study the game between producers and analyze the content created at equilibrium.
We show that standard online learning algorithms, such as Hedge and EXP3, unfortunately incentivize producers to create low-quality content.
arXiv Detail & Related papers (2023-06-13T00:55:10Z) - Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video Quality Assessment [54.31355080688127]
We introduce a text-prompted Semantic Affinity Quality Index (SAQI) and its localized version (SAQI-Local) using Contrastive Language-Image Pre-training (CLIP)
BVQI-Local demonstrates unprecedented performance, surpassing existing zero-shot indices by at least 24% on all datasets.
We conduct comprehensive analyses to investigate different quality concerns of distinct indices, demonstrating the effectiveness and rationality of our design.
arXiv Detail & Related papers (2023-04-28T08:06:05Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Personality-Driven Social Multimedia Content Recommendation [68.46899477180837]
We investigate the impact of human personality traits on the content recommendation model by applying a novel personality-driven multi-view content recommender system.
Our experimental results and real-world case study demonstrate not only PersiC's ability to perform efficient human personality-driven multi-view content recommendation, but also its capacity to yield actionable digital ad strategy recommendations.
arXiv Detail & Related papers (2022-07-25T14:37:18Z) - Modeling Content Creator Incentives on Algorithm-Curated Platforms [76.53541575455978]
We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
arXiv Detail & Related papers (2022-06-27T08:16:59Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)