To Know by the Company Words Keep and What Else Lies in the Vicinity
- URL: http://arxiv.org/abs/2205.00148v1
- Date: Sat, 30 Apr 2022 03:47:48 GMT
- Title: To Know by the Company Words Keep and What Else Lies in the Vicinity
- Authors: Jake Ryland Williams and Hunter Scott Heidenreich
- Abstract summary: We introduce an analytic model of the statistics learned by seminal algorithms, including GloVe and Word2Vec.
We derive -- to the best of our knowledge -- the first known solution to Word2Vec's softmax-optimized skip-gram algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of state-of-the-art (SOTA) Natural Language Processing (NLP)
systems has steadily been establishing new techniques to absorb the statistics
of linguistic data. These techniques often trace well-known constructs from
traditional theories, and we study these connections to close gaps around key
NLP methods as a means to orient future work. For this, we introduce an
analytic model of the statistics learned by seminal algorithms (including GloVe
and Word2Vec), and derive insights for systems that use these algorithms and
the statistics of co-occurrence, in general. In this work, we derive -- to the
best of our knowledge -- the first known solution to Word2Vec's
softmax-optimized skip-gram algorithm. This result presents exciting potential
for future development as a direct solution to a deep learning (DL) language
model's (LM's) matrix factorization. However, we use the solution to
demonstrate the seemingly universal existence of a property that word vectors
exhibit and which allows for the prophylactic discernment of biases in data --
prior to their absorption by DL models. To qualify our work, we conduct an
analysis of independence, i.e., on the density of statistical dependencies in
co-occurrence models, which in turn renders insights on the distributional
hypothesis' partial fulfillment by co-occurrence statistics.
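For orientation, the softmax-optimized skip-gram objective referenced above can be written out concretely. The sketch below is a minimal illustration rather than the authors' derivation or code: the toy corpus, window size, and embedding dimension are assumptions, and the function simply evaluates the negative log-likelihood that skip-gram minimizes over co-occurrence counts.

```python
# A minimal sketch (assumed toy corpus, window size, and dimensions; not the
# paper's code): evaluate the softmax-optimized skip-gram objective, i.e. the
# negative log-likelihood of context words given target words, over counts.
import numpy as np
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, window = len(vocab), 2

# Count co-occurrences n(w, c) within a symmetric window.
counts = Counter()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            counts[(idx[w], idx[corpus[j]])] += 1

rng = np.random.default_rng(0)
d = 8
U = rng.normal(scale=0.1, size=(V, d))  # target-word vectors
C = rng.normal(scale=0.1, size=(V, d))  # context-word vectors

def skipgram_softmax_loss(U, C, counts):
    """Sum over pairs of n(w, c) * -log softmax(U @ C.T)[w, c]."""
    scores = U @ C.T                            # (V, V) logits
    log_z = np.log(np.exp(scores).sum(axis=1))  # log partition per target
    log_p = scores - log_z[:, None]             # row-wise log softmax
    return -sum(n * log_p[w, c] for (w, c), n in counts.items())

print(f"loss on random vectors: {skipgram_softmax_loss(U, C, counts):.3f}")
```

The solution the abstract refers to concerns the minimizers of exactly this kind of objective; the co-occurrence counts n(w, c) are the statistics on which the analytic model is built.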
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z) - SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models [54.78329741186446]
We propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation.
Experiments across both in-domain and out-of-domain benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm.
arXiv Detail & Related papers (2024-08-28T06:33:03Z) - Large Language Models are Effective Priors for Causal Graph Discovery [6.199818486385127]
Causal structure discovery from observations can be improved by integrating background knowledge provided by an expert to reduce the hypothesis space.
Recently, Large Language Models (LLMs) have begun to be considered as sources of prior information given the low cost of querying them relative to a human expert.
arXiv Detail & Related papers (2024-05-22T11:39:11Z) - LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language [35.84181171987974]
Our goal is to build a regression model that can process numerical data and make probabilistic predictions at arbitrary locations.
We start by exploring strategies for eliciting explicit, coherent numerical predictive distributions from Large Language Models.
We demonstrate the ability to usefully incorporate text into numerical predictions, improving predictive performance and giving quantitative structure that reflects qualitative descriptions.
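The summary leaves the elicitation strategies to the paper itself. Purely as a hedged illustration of the end product, the snippet below turns a handful of numeric samples (standing in here for already-parsed LLM completions, which is an assumption) into an empirical predictive distribution.

```python
# Rough illustration only: summarizing numeric samples elicited from an LLM
# as an empirical predictive distribution. `samples` stands in for parsed
# model completions; the paper's elicitation strategies are not shown here.
import numpy as np

samples = np.array([12.1, 11.8, 13.0, 12.4, 12.2, 14.1, 11.9, 12.6])

median = np.median(samples)
lo, hi = np.quantile(samples, [0.1, 0.9])  # 80% interval from empirical CDF
print(f"point prediction: {median:.2f}, 80% interval: [{lo:.2f}, {hi:.2f}]")
```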
arXiv Detail & Related papers (2024-05-21T15:13:12Z) - The Common Stability Mechanism behind most Self-Supervised Learning Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z) - Beyond the Black Box: A Statistical Model for LLM Reasoning and Inference [0.9898607871253774]
This paper introduces a novel Bayesian learning model to explain the behavior of Large Language Models (LLMs).
We develop a theoretical framework based on an ideal generative text model represented by a multinomial transition probability matrix with a prior, and examine how LLMs approximate this matrix.
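A "multinomial transition probability matrix with a prior" has a compact concrete form. The following is a minimal sketch assuming a symmetric Dirichlet prior and toy transition counts, not the paper's actual model.

```python
# A minimal sketch, assuming a symmetric Dirichlet prior and toy data: a
# multinomial transition probability matrix over tokens, with its posterior
# mean updated from observed (previous token, next token) transitions.
import numpy as np

V = 4                    # toy vocabulary size
alpha = np.ones((V, V))  # symmetric Dirichlet prior for each row

# Observed next-token transitions as (prev_token, next_token) pairs.
transitions = [(0, 1), (1, 2), (0, 1), (2, 3), (1, 2), (3, 0)]
counts = np.zeros((V, V))
for prev, nxt in transitions:
    counts[prev, nxt] += 1

# Posterior mean of each row's distribution (Dirichlet-multinomial update).
posterior = (alpha + counts) / (alpha + counts).sum(axis=1, keepdims=True)
print(posterior.round(3))
```

Each row of `posterior` is a posterior-mean next-token distribution, the kind of object the paper examines LLMs as approximating.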
arXiv Detail & Related papers (2024-02-05T16:42:10Z) - Surprisal Driven $k$-NN for Robust and Interpretable Nonparametric Learning [1.4293924404819704]
We shed new light on the traditional nearest neighbors algorithm from the perspective of information theory.
We propose a robust and interpretable framework for tasks such as classification, regression, density estimation, and anomaly detection using a single model.
Our work showcases the architecture's versatility by achieving state-of-the-art results in classification and anomaly detection.
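The information-theoretic reading of nearest neighbors admits a short illustration. The sketch below is one plausible instantiation rather than the paper's architecture: it scores points by surprisal, the negative log of a k-NN density estimate, so that outliers receive high scores.

```python
# A hedged sketch (not the paper's architecture): treat the distance to the
# k-th nearest neighbor as a density estimate and score query points by
# surprisal, -log p(x); larger surprisal means more anomalous.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))              # inliers from a 2-D Gaussian
queries = np.array([[0.0, 0.0], [6.0, 6.0]])  # a typical point and an outlier
k, n = 10, len(data)

for q in queries:
    r_k = np.sort(np.linalg.norm(data - q, axis=1))[k - 1]
    volume = np.pi * r_k**2               # area of a 2-D ball of radius r_k
    surprisal = -np.log(k / (n * volume)) # -log of the k-NN density estimate
    print(f"query {q}: surprisal = {surprisal:.2f}")
```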
arXiv Detail & Related papers (2023-11-17T00:35:38Z) - Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals [67.64770842323966]
Causal explanations of predictions of NLP systems are essential to ensure safety and establish trust.
Existing methods often fall short of explaining model predictions effectively or efficiently.
We propose two approaches for counterfactual (CF) approximation.
arXiv Detail & Related papers (2023-10-01T07:31:04Z) - MAUVE Scores for Generative Models: Theory and Practice [95.86006777961182]
We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images.
We find that MAUVE can quantify the gaps between the distributions of human-written text and those of modern neural language models.
We demonstrate in the vision domain that MAUVE can identify known properties of generated images on par with or better than existing metrics.
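At its core, MAUVE reduces the comparison to an area under a divergence frontier between two distributions. The sketch below illustrates that computation on toy histograms; the quantization of embeddings into bins, the scaling constant `c`, and the endpoint convention are simplifying assumptions here, not the released implementation.

```python
# A hedged, toy-scale sketch of a MAUVE-style score: the area under a
# divergence frontier traced by KL divergences against mixtures of P and Q.
# Quantization is omitted; histograms and the constant c are illustrative.
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions on the same support."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

P = np.array([0.5, 0.3, 0.1, 0.1])  # e.g. human text over quantized bins
Q = np.array([0.2, 0.2, 0.3, 0.3])  # e.g. model text over the same bins
c = 5.0                             # scaling constant (illustrative choice)

# One frontier point per mixture R = lam * P + (1 - lam) * Q.
lams = np.linspace(1e-3, 1 - 1e-3, 200)
xs = [np.exp(-c * kl(Q, lam * P + (1 - lam) * Q)) for lam in lams]
ys = [np.exp(-c * kl(P, lam * P + (1 - lam) * Q)) for lam in lams]

# Close the curve at (0, 1) and (1, 0), then integrate (trapezoid rule).
x = np.array([0.0] + xs + [1.0])
y = np.array([1.0] + ys + [0.0])
order = np.argsort(x)
x, y = x[order], y[order]
score = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))
print(f"MAUVE-style overlap score: {score:.3f}")  # higher when P, Q overlap
```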
arXiv Detail & Related papers (2022-12-30T07:37:40Z) - Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis [13.07356367140208]
We introduce DistNLI, a new diagnostic dataset for natural language inference.
We find that the extent of models' understanding is associated with model size and vocabulary size.
arXiv Detail & Related papers (2022-09-11T00:33:28Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.