Objectifying the Subjective: Cognitive Biases in Topic Interpretations
- URL: http://arxiv.org/abs/2507.19117v1
- Date: Fri, 25 Jul 2025 09:51:42 GMT
- Title: Objectifying the Subjective: Cognitive Biases in Topic Interpretations
- Authors: Swapnil Hingmire, Ze Shi Li, Shiyu Zeng, Ahmed Musa Awon, Luiz Franciscatto Guerra, Neil Ernst
- Abstract summary: We propose constructs of topic quality and ask users to assess them in the context of a topic. We use reflexive thematic analysis to identify themes of topic interpretations from rationales. We propose a theory of topic interpretation based on the anchoring-and-adjustment heuristic: users anchor on salient words and make semantic adjustments to arrive at an interpretation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretation of topics is crucial for their downstream applications. State-of-the-art evaluation measures of topic quality, such as coherence and word intrusion, do not measure how much a topic facilitates the exploration of a corpus. To design evaluation measures grounded in a task and a population of users, we conduct user studies to understand how users interpret topics. We propose constructs of topic quality and ask users to assess them in the context of a topic and to provide rationales for their assessments. We use reflexive thematic analysis to identify themes of topic interpretations from these rationales. Users interpret topics based on availability and representativeness heuristics rather than probability. We propose a theory of topic interpretation based on the anchoring-and-adjustment heuristic: users anchor on salient words and make semantic adjustments to arrive at an interpretation. Topic interpretation can be viewed as making a judgment under uncertainty by an ecologically rational user, and hence cognitive-bias-aware user models and evaluation frameworks are needed.
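The abstract above cites coherence as a state-of-the-art topic-quality measure. For reference, a minimal sketch of NPMI-based topic coherence follows; this is a standard textbook formulation, not the authors' implementation, and the toy corpus and word lists are illustrative assumptions:

```python
from itertools import combinations
from math import log

def npmi_coherence(topic_words, documents, eps=1e-12):
    """Mean NPMI over pairs of a topic's top words, with word
    (co-)occurrence probabilities estimated from binary document counts."""
    docs = [set(d.lower().split()) for d in documents]
    n = len(docs)

    def p(*words):
        # Fraction of documents containing all the given words.
        return sum(all(w in d for w in words) for d in docs) / n

    scores = []
    for w1, w2 in combinations(topic_words, 2):
        p12 = p(w1, w2)
        if p12 == 0:
            scores.append(-1.0)  # words never co-occur: minimum NPMI
            continue
        pmi = log(p12 / (p(w1) * p(w2)))
        scores.append(pmi / -log(p12 + eps))  # normalize PMI into [-1, 1]
    return sum(scores) / len(scores)

# Toy corpus (illustrative): a coherent "fruit" topic should
# score higher than a mixed word set.
docs = [
    "apple banana fruit market",
    "apple fruit juice",
    "car engine road",
    "banana fruit smoothie",
]
print(npmi_coherence(["apple", "banana", "fruit"], docs))  # higher (coherent)
print(npmi_coherence(["apple", "car", "road"], docs))      # lower (mixed)
```

Measures like this score word co-occurrence statistics only; the paper's point is precisely that such scores do not capture how users actually interpret a topic.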
Related papers
- TopicImpact: Improving Customer Feedback Analysis with Opinion Units for Topic Modeling and Star-Rating Prediction
We improve the extraction of insights from customer reviews by restructuring the topic modeling pipeline to operate on opinion units. The result is a heightened performance of the subsequent topic modeling, leading to coherent and interpretable topics. By correlating the topics and sentiments with business metrics, such as star ratings, we can gain insights into how specific customer concerns impact business outcomes.
arXiv Detail & Related papers (2025-07-16T09:19:26Z)
- topicwizard -- a Modern, Model-agnostic Framework for Topic Model Visualization and Interpretation
We introduce topicwizard, a framework for model-agnostic topic model interpretation. It helps users examine the complex semantic relations between documents, words and topics learned by topic models.
arXiv Detail & Related papers (2025-05-19T12:19:01Z)
- Contextualized Evaluations: Judging Language Model Responses to Underspecified Queries
We present a protocol that synthetically constructs context surrounding an under-specified query and provides it during evaluation. We find that the presence of context can 1) alter conclusions drawn from evaluation, even flipping benchmark rankings between model pairs, 2) nudge evaluators to make fewer judgments based on surface-level criteria, like style, and 3) provide new insights about model behavior across diverse contexts.
arXiv Detail & Related papers (2024-11-11T18:58:38Z)
- Can Interpretability Layouts Influence Human Perception of Offensive Sentences?
This paper conducts a user study to assess whether three machine learning (ML) interpretability layouts can influence participants' views when evaluating sentences containing hate speech.
arXiv Detail & Related papers (2024-03-01T13:25:54Z)
- Interpretation Modeling: Social Grounding of Sentences by Reasoning over Their Implicit Moral Judgments
Single gold-standard interpretations rarely exist, challenging conventional assumptions in natural language processing.
This work introduces the interpretation modeling (IM) task which involves modeling several interpretations of a sentence's underlying semantics.
A first-of-its-kind IM dataset is curated to support experiments and analyses.
arXiv Detail & Related papers (2023-11-27T07:50:55Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account. We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed. Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- You Are What You Talk About: Inducing Evaluative Topics for Personality Analysis
Evaluative language data has become more accessible with social media's rapid growth.
We introduce the notion of evaluative topics, obtained by applying topic models to pre-filtered evaluative text.
We then link evaluative topics to individual text authors to build their evaluative profiles.
arXiv Detail & Related papers (2023-02-01T15:04:04Z)
- On the Faithfulness Measurements for Model Interpretations
Post-hoc interpretations aim to uncover how natural language processing (NLP) models make predictions.
To assess such interpretations, we start with three criteria: the removal-based criterion, the sensitivity of interpretations, and the stability of interpretations.
Motivated by the desideratum of these faithfulness notions, we introduce a new class of interpretation methods that adopt techniques from the adversarial domain.
arXiv Detail & Related papers (2021-04-18T09:19:44Z)
- Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
- Evaluations and Methods for Explanation through Robustness Analysis
We establish a novel set of evaluation criteria for such feature-based explanations through robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
- A Computational Model Implementing Subjectivity with the 'Room Theory': The Case of Detecting Emotion from Text
This work introduces a new method to consider subjectivity and general context dependency in text analysis.
By using a similarity measure between words, we are able to extract the relative relevance of the elements in the benchmark.
This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text.
arXiv Detail & Related papers (2020-05-12T21:26:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.