Vague Knowledge: Evidence from Analyst Reports
- URL: http://arxiv.org/abs/2505.12269v3
- Date: Sat, 24 May 2025 22:50:59 GMT
- Title: Vague Knowledge: Evidence from Analyst Reports
- Authors: Kerry Xiao, Amy Zang
- Abstract summary: We argue that language, with differing ability to convey vague information, plays an important but less-known role in representing subjective expectations. We find that in their reports, analysts include useful information in linguistic expressions but not numerical forecasts.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: People in the real world often possess vague knowledge of future payoffs, for which quantification is not feasible or desirable. We argue that language, with differing ability to convey vague information, plays an important but less-known role in representing subjective expectations. Empirically, we find that in their reports, analysts include useful information in linguistic expressions but not numerical forecasts. Specifically, the textual tone of analyst reports has predictive power for forecast errors and subsequent revisions in numerical forecasts, and this relation becomes stronger when analysts' language is vaguer, when uncertainty is higher, and when analysts are busier. Overall, our theory and evidence suggest that some useful information is vaguely known and only communicated through language.
Related papers
- Anthropomimetic Uncertainty: What Verbalized Uncertainty in Language Models is Missing [66.04926909181653]
We argue for anthropomimetic uncertainty, meaning that intuitive and trustworthy uncertainty communication requires a degree of linguistic authenticity and personalization to the user. We conclude by pointing out unique factors in human-machine communication of uncertainty and deconstruct the data biases that influence machine uncertainty communication.
arXiv Detail & Related papers (2025-07-11T14:07:22Z)
- Analyzing the temporal dynamics of linguistic features contained in misinformation [0.0]
This study uses natural language processing to analyze PolitiFact statements spanning 2010 to 2024. The results show that statement sentiment has decreased significantly over time, reflecting a generally more negative tone in PolitiFact statements.
arXiv Detail & Related papers (2025-02-27T14:15:43Z)
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- Explainable assessment of financial experts' credibility by classifying social media forecasts and checking the predictions with actual market data [6.817247544942709]
We propose a credibility assessment solution for financial creators in social media that combines Natural Language Processing and Machine Learning.
The reputation of the contributors is assessed by automatically classifying their forecasts on asset values by type and verifying these predictions with actual market data.
The system provides natural language explanations of its decisions based on a model-agnostic analysis of relevant features.
arXiv Detail & Related papers (2024-06-17T08:08:03Z)
- What Kinds of Tokens Benefit from Distant Text? An Analysis on Long Context Language Modeling [27.75379365518913]
We study which kinds of words benefit more from long contexts in language models.
We find that content words (e.g., nouns, adjectives) and the initial tokens of words benefit the most.
We also observe that language models become more confident with longer contexts, resulting in sharper probability distributions.
arXiv Detail & Related papers (2024-06-17T06:07:29Z)
- Testing the Predictions of Surprisal Theory in 11 Languages [77.45204595614]
We investigate the relationship between surprisal and reading times in eleven different languages. By focusing on a more diverse set of languages, we argue that these results offer the most robust link to date between information theory and incremental language processing across languages.
arXiv Detail & Related papers (2023-07-07T15:37:50Z)
- Consistency Analysis of ChatGPT [65.268245109828]
This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour.
Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions.
arXiv Detail & Related papers (2023-03-11T01:19:01Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- Do Language Embeddings Capture Scales? [54.1633257459927]
We show that pretrained language models capture a significant amount of information about the scalar magnitudes of objects.
We identify contextual information in pre-training and numeracy as two key factors affecting their performance.
arXiv Detail & Related papers (2020-10-11T21:11:09Z)
- Measuring Forecasting Skill from Text [15.795144936579627]
We explore connections between the language people use to describe their predictions and their forecasting skill.
We present a number of linguistic metrics which are computed over text associated with people's predictions about the future.
We demonstrate that it is possible to accurately predict forecasting skill using a model that is based solely on language.
arXiv Detail & Related papers (2020-06-12T19:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.