Uncertainty over Uncertainty: Investigating the Assumptions,
Annotations, and Text Measurements of Economic Policy Uncertainty
- URL: http://arxiv.org/abs/2010.04706v1
- Date: Fri, 9 Oct 2020 17:50:29 GMT
- Title: Uncertainty over Uncertainty: Investigating the Assumptions,
Annotations, and Text Measurements of Economic Policy Uncertainty
- Authors: Katherine A. Keith, Christoph Teichmann, Brendan O'Connor, Edgar Meij
- Abstract summary: We examine an economic index that measures economic policy uncertainty from keyword occurrences in news.
We find that some annotator disagreements about economic policy uncertainty can be attributed to ambiguity in language.
- Score: 12.787262921924953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Methods and applications are inextricably linked in science, and in
particular in the domain of text-as-data. In this paper, we examine one such
text-as-data application, an established economic index that measures economic
policy uncertainty from keyword occurrences in news. This index, which is shown
to correlate with firm investment, employment, and excess market returns, has
had substantive impact in both the private sector and academia. Yet, as we
revisit and extend the original authors' annotations and text measurements we
find interesting text-as-data methodological research questions: (1) Are
annotator disagreements a reflection of ambiguity in language? (2) Do
alternative text measurements correlate with one another and with measures of
external predictive validity? We find for this application (1) some annotator
disagreements of economic policy uncertainty can be attributed to ambiguity in
language, and (2) switching measurements from keyword-matching to supervised
machine learning classifiers results in low correlation, a concerning
implication for the validity of the index.
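The abstract contrasts two ways of measuring economic policy uncertainty (EPU) in news text: keyword matching and supervised classification. As a rough illustration of that contrast (a sketch under stated assumptions, not the authors' code), the snippet below labels articles with (a) a keyword-matching rule in the style of the widely cited Baker-Bloom-Davis EPU construction and (b) a small supervised classifier, then computes the correlation between the two measurements. The term sets are paraphrased, and the sample texts and labels are hypothetical.

```python
# Illustrative sketch: keyword-matching vs. supervised measurement of EPU.
# Term sets, training texts, and labels are hypothetical stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

ECON = {"economy", "economic"}
UNCERT = {"uncertain", "uncertainty"}
POLICY = {"congress", "deficit", "federal reserve", "legislation",
          "regulation", "white house"}

def keyword_epu(article: str) -> int:
    """1 if the article contains at least one term from each category."""
    text = article.lower()
    return int(all(any(term in text for term in cat)
                   for cat in (ECON, UNCERT, POLICY)))

# Tiny hypothetical labeled sample standing in for annotated news articles.
train_texts = [
    "Congress delays the budget; economic uncertainty weighs on firms.",
    "The economy grew steadily last quarter with strong employment.",
    "Uncertainty over new regulation clouds the economic outlook.",
    "Local team wins championship in front of a record crowd.",
]
train_labels = [1, 0, 1, 0]  # 1 = discusses economic policy uncertainty

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

def classifier_epu(article: str) -> int:
    """Supervised-classifier measurement of the same construct."""
    return int(clf.predict(vec.transform([article]))[0])

# Compare the two measurements on a (here, trivially small) corpus.
corpus = train_texts  # in practice: a large news corpus, aggregated over time
kw = np.array([keyword_epu(a) for a in corpus], dtype=float)
ml = np.array([classifier_epu(a) for a in corpus], dtype=float)
print("Pearson correlation:", np.corrcoef(kw, ml)[0, 1])
```

A low correlation between the two measurement columns is the kind of divergence the paper flags as a validity concern for the index.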
Related papers
- Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding [118.75567341513897]
Existing methods typically analyze target text in isolation or solely with non-member contexts.
We propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts.
arXiv Detail & Related papers (2024-09-05T09:10:38Z)
- Automatic detection of relevant information, predictions and forecasts in financial news through topic modelling with Latent Dirichlet Allocation [9.059679096341474]
We focus on the analysis of financial news to identify relevant text and, within that text, forecasts and predictions.
We propose a novel Natural Language Processing (NLP) system to assist investors in the detection of relevant financial events.
arXiv Detail & Related papers (2024-03-30T17:49:34Z)
- Detection of Temporality at Discourse Level on Financial News by Combining Natural Language Processing and Machine Learning [8.504685056067144]
Finance-related news outlets such as Bloomberg News, CNN Business and Forbes are valuable sources of real data for market screening systems.
We propose a novel system to detect the temporality of finance-related news at discourse level.
We have tested our system on a labelled dataset of finance-related news annotated by researchers with knowledge in the field.
arXiv Detail & Related papers (2024-03-30T16:40:10Z)
- Goodhart's Law Applies to NLP's Explanation Benchmarks [57.26445915212884]
We critically examine two sets of metrics: the ERASER metrics (comprehensiveness and sufficiency) and the EVAL-X metrics.
We show that we can inflate a model's comprehensiveness and sufficiency scores dramatically without altering its predictions or explanations on in-distribution test inputs.
Our results raise doubts about the ability of current metrics to guide explainability research, underscoring the need for a broader reassessment of what precisely these metrics are intended to capture.
arXiv Detail & Related papers (2023-08-28T03:03:03Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations [2.543865489517869]
This work motivates textual counterfactuals by laying the groundwork for a novel evaluation scheme inspired by the faithfulness of explanations.
Experiments on sentiment analysis data show that the connectedness of counterfactuals to their original counterparts is not obvious in either model.
arXiv Detail & Related papers (2022-10-17T09:50:02Z)
- On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations [74.70957445600936]
Multiple metrics have been introduced to measure fairness in various natural language processing tasks.
These metrics can be roughly categorized into two categories: 1) extrinsic metrics for evaluating fairness in downstream applications and 2) intrinsic metrics for estimating fairness in upstream language representation models.
arXiv Detail & Related papers (2022-03-25T22:17:43Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Measuring Economic Policy Uncertainty Using an Unsupervised Word Embedding-based Method [0.0]
Economic Policy Uncertainty (EPU) is a critical indicator in economic studies and can be used to forecast recessions.
The EPU index is computed by counting news articles containing pre-defined keywords related to policy-making and the economy.
In this paper, we propose an unsupervised text mining method that uses a word-embedding representation space to select relevant keywords (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-05-10T19:34:14Z)
- Bankruptcy prediction using disclosure text features [0.0]
This work proposes a new distress dictionary based on the sentences used by managers in explaining financial status.
It demonstrates the significant differences in linguistic features between bankrupt and non-bankrupt firms.
Using a large sample of 500 bankrupt firms, it builds predictive models and compares the performance against two dictionaries used in financial text analysis.
arXiv Detail & Related papers (2021-01-03T22:23:22Z)
- Speakers Fill Lexical Semantic Gaps with Context [65.08205006886591]
We operationalise the lexical ambiguity of a word as the entropy of meanings it can take.
We find significant correlations between our estimate of ambiguity and the number of synonyms a word has in WordNet.
This suggests that, in the presence of ambiguity, speakers compensate by making contexts more informative.
arXiv Detail & Related papers (2020-10-05T17:19:10Z)
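The entry above on measuring EPU with an unsupervised word-embedding-based method proposes selecting relevant keywords in an embedding space. The sketch below illustrates that general idea only: rank candidate vocabulary terms by cosine similarity to a few seed terms. The 4-dimensional vectors and the seed terms are made up for illustration; the cited paper's actual embeddings, vocabulary, and selection procedure may differ.

```python
# Toy sketch of embedding-based keyword expansion for EPU measurement.
# Vectors and terms are hypothetical; real usage would load pretrained
# embeddings (e.g., word2vec or GloVe) trained on a news corpus.
import numpy as np

# Hypothetical embeddings: term -> vector.
embeddings = {
    "uncertainty": np.array([0.9, 0.1, 0.0, 0.2]),
    "risk":        np.array([0.8, 0.2, 0.1, 0.3]),
    "policy":      np.array([0.1, 0.9, 0.2, 0.0]),
    "regulation":  np.array([0.2, 0.8, 0.3, 0.1]),
    "football":    np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand_keywords(seeds, vocab, k=2):
    """Return the k vocabulary terms closest (on average) to the seed terms."""
    scores = {
        term: np.mean([cosine(vec, vocab[s]) for s in seeds])
        for term, vec in vocab.items() if term not in seeds
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(expand_keywords({"uncertainty", "policy"}, embeddings))
# e.g. ['risk', 'regulation'] -- candidate additions to the keyword lists
```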