Persistent Homology of Topic Networks for the Prediction of Reader Curiosity
- URL: http://arxiv.org/abs/2506.11095v1
- Date: Fri, 06 Jun 2025 08:11:10 GMT
- Title: Persistent Homology of Topic Networks for the Prediction of Reader Curiosity
- Authors: Manuel D. S. Hopp, Vincent Labatut, Arthur Amalvy, Richard Dufour, Hannah Stone, Hayley Jach, Kou Murayama
- Abstract summary: We introduce a framework that models reader curiosity by quantifying semantic information gaps within a text's semantic structure. We experimentally show that the resulting topological features significantly improve curiosity prediction compared to a baseline model. This pipeline offers a new computational method for analyzing text structure and its relation to reader engagement.
- Score: 5.086764288024297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reader curiosity, the drive to seek information, is crucial for textual engagement, yet remains relatively underexplored in NLP. Building on Loewenstein's Information Gap Theory, we introduce a framework that models reader curiosity by quantifying semantic information gaps within a text's semantic structure. Our approach leverages BERTopic-inspired topic modeling and persistent homology to analyze the evolving topology (connected components, cycles, voids) of a dynamic semantic network derived from text segments, treating these features as proxies for information gaps. To empirically evaluate this pipeline, we collect reader curiosity ratings from participants (n = 49) as they read S. Collins's novel "The Hunger Games". We then use the topological features from our pipeline as independent variables to predict these ratings, and experimentally show that they significantly improve curiosity prediction compared to a baseline model (73% vs. 30% explained deviance), validating our approach. This pipeline offers a new computational method for analyzing text structure and its relation to reader engagement.
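As a rough illustration of the pipeline sketched in the abstract, the snippet below embeds text segments, builds a cosine distance matrix over a sliding window, and summarizes each window's persistent homology (connected components, cycles, voids) as numeric features. This is a minimal sketch under illustrative assumptions, not the authors' implementation: the `all-MiniLM-L6-v2` encoder, the `ripser` backend, the window size, and the total-persistence summary are all placeholders chosen for brevity.

```python
# Minimal sketch of the pipeline described above, NOT the authors' code.
# Assumptions (illustrative only): a sentence-transformers encoder for segment
# embeddings, the ripser backend for persistent homology, a fixed sliding
# window, and total persistence as the per-dimension summary.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_distances
from ripser import ripser

def topological_features(segments, window=20, maxdim=2):
    """Per sliding window of text segments, summarize the persistent homology
    (H0 components, H1 cycles, H2 voids) of the semantic distance matrix."""
    model = SentenceTransformer("all-MiniLM-L6-v2")      # stand-in encoder
    emb = model.encode(segments, normalize_embeddings=True)
    rows = []
    for end in range(window, len(segments) + 1):
        dist = cosine_distances(emb[end - window:end])    # pairwise semantic distances
        dgms = ripser(dist, distance_matrix=True, maxdim=maxdim)["dgms"]
        row = []
        for dgm in dgms:
            finite = dgm[np.isfinite(dgm[:, 1])]          # drop the infinite H0 bar
            row.append(float((finite[:, 1] - finite[:, 0]).sum()) if len(finite) else 0.0)
        rows.append(row)                                  # total persistence per dimension
    return np.asarray(rows)                               # shape: (n_windows, maxdim + 1)
```

These per-window summaries would then serve as independent variables for predicting the curiosity ratings; total persistence is only one possible summary, and alternatives such as Betti curves or persistence entropy could be substituted as features.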
Related papers
- A Comprehensive Survey on Self-Interpretable Neural Networks [36.0575431131253]
Self-interpretable neural networks inherently reveal the prediction rationale through the model structures. We first collect and review existing works on self-interpretable neural networks and provide a structured summary of their methodologies. We also present concrete, visualized examples of model explanations and discuss their applicability across diverse scenarios.
arXiv Detail & Related papers (2025-01-26T18:50:16Z) - Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z) - Intrinsic User-Centric Interpretability through Global Mixture of Experts [31.738009841932374]
InterpretCC is a family of intrinsically interpretable neural networks that optimize for ease of human understanding and explanation faithfulness. We show that InterpretCC explanations have higher actionability and usefulness than other intrinsically interpretable approaches.
arXiv Detail & Related papers (2024-02-05T11:55:50Z) - Author Clustering and Topic Estimation for Short Texts [69.54017251622211]
We propose a novel model that expands on Latent Dirichlet Allocation by modeling strong dependence among the words in the same document.
We also simultaneously cluster users, removing the need for post-hoc cluster estimation.
Our method performs as well as, or better than, traditional approaches to problems arising in short text.
arXiv Detail & Related papers (2021-06-15T20:55:55Z) - Local Explanation of Dialogue Response Generation [77.68077106724522]
Local explanation of response generation (LERG) is proposed to gain insights into the reasoning process of a generation model.
LERG views the sequence prediction as uncertainty estimation of a human response and then creates explanations by perturbing the input and calculating the certainty change over the human response.
Our results show that our method consistently outperforms other widely used methods on the proposed automatic and human evaluation metrics for this new task by 4.4-12.8%.
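As a rough illustration of the perturbation idea described above, the sketch below scores each context token by the drop in the model's log-likelihood of the reference human response when that token is masked. It is not the LERG implementation; `response_logprob` is a hypothetical scorer standing in for whatever generation model is being explained.

```python
# Minimal sketch of a perturbation-based local explanation in the spirit of
# LERG (NOT the authors' implementation). `response_logprob` is a hypothetical
# helper: it should return the generation model's log-likelihood of the
# reference human response given a (possibly perturbed) dialogue context.
from typing import Callable, List

def perturbation_saliency(
    context: List[str],
    response: List[str],
    response_logprob: Callable[[List[str], List[str]], float],
    mask_token: str = "[MASK]",
) -> List[float]:
    """Score each context token by how much masking it reduces the model's
    certainty about the reference human response."""
    base = response_logprob(context, response)
    scores = []
    for i in range(len(context)):
        perturbed = context[:i] + [mask_token] + context[i + 1:]
        scores.append(base - response_logprob(perturbed, response))
    # Larger score => masking that token hurts the response likelihood more,
    # i.e., the token matters more for generating this response.
    return scores
```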
arXiv Detail & Related papers (2021-06-11T17:58:36Z) - Generating Diversified Comments via Reader-Aware Topic Modeling and Saliency Detection [25.16392119801612]
We propose a reader-aware topic modeling and saliency information detection framework to enhance the quality of generated comments.
For reader-aware topic modeling, we design a variational generative clustering algorithm for latent semantic learning and topic mining from reader comments.
For saliency information detection, we introduce Bernoulli distribution estimation on the news content to select salient information.
arXiv Detail & Related papers (2021-02-13T03:50:31Z) - Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z) - Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection [21.02924712220406]
We build hierarchical explanations by detecting feature interactions.
Such explanations visualize how words and phrases are combined at different levels of the hierarchy.
Experiments show the effectiveness of the proposed method in providing explanations both faithful to models and interpretable to humans.
arXiv Detail & Related papers (2020-04-04T20:56:37Z) - Towards Accurate Scene Text Recognition with Semantic Reasoning Networks [52.86058031919856]
We propose a novel end-to-end trainable framework named semantic reasoning network (SRN) for accurate scene text recognition.
A global semantic reasoning module (GSRM) is introduced to capture global semantic context through multi-way parallel transmission.
Results on 7 public benchmarks, including regular text, irregular text and non-Latin long text, verify the effectiveness and robustness of the proposed method.
arXiv Detail & Related papers (2020-03-27T09:19:25Z) - Short Text Classification via Knowledge powered Attention with Similarity Matrix based CNN [6.6723692875904375]
We propose a knowledge powered attention with similarity matrix based convolutional neural network (KASM) model.
We use a knowledge graph (KG) to enrich the semantic representation of short text; specifically, parent-entity information is introduced into our model. To measure the importance of this knowledge, we introduce attention mechanisms to select the important information.
arXiv Detail & Related papers (2020-02-09T12:08:43Z) - ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine
Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)