ReFRAME or Remain: Unsupervised Lexical Semantic Change Detection with Frame Semantics
- URL: http://arxiv.org/abs/2602.04514v2
- Date: Mon, 09 Feb 2026 09:44:56 GMT
- Title: ReFRAME or Remain: Unsupervised Lexical Semantic Change Detection with Frame Semantics
- Authors: Bach Phan-Tat, Kris Heylen, Dirk Geeraerts, Stefano De Pascale, Dirk Speelman
- Abstract summary: We develop a new method for detecting semantic change based on frame semantics. We show that this method is effective for detecting semantic change and can even outperform many distributional semantic models.
- Score: 1.1340133299604382
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The majority of contemporary computational methods for lexical semantic change (LSC) detection are based on neural embedding distributional representations. Although these models perform well on LSC benchmarks, their results are often difficult to interpret. We explore an alternative approach that relies solely on frame semantics. We show that this method is effective for detecting semantic change and can even outperform many distributional semantic models. Finally, we present a detailed quantitative and qualitative analysis of its predictions, demonstrating that they are both plausible and highly interpretable.
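The abstract does not spell out the scoring mechanism, but a frame-semantic change detector can be sketched as follows: collect the frames a target word evokes in each time period, normalize the counts into distributions, and score change as the divergence between them. This is a minimal illustrative sketch, not the paper's actual method; the frame labels and function names below are hypothetical.

```python
from collections import Counter
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions (dicts), in bits."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a, b):
        return sum(av * log2(av / b[k]) for k, av in a.items() if av > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def frame_distribution(frame_labels):
    """Normalize a list of evoked frame labels into a probability distribution."""
    counts = Counter(frame_labels)
    total = sum(counts.values())
    return {f: c / total for f, c in counts.items()}

# Hypothetical frame annotations for one target word in two time periods.
period_1 = ["Motion", "Motion", "Motion", "Travel"]
period_2 = ["Motion", "Awareness", "Awareness", "Awareness"]

change_score = js_divergence(frame_distribution(period_1),
                             frame_distribution(period_2))
```

Jensen-Shannon divergence is bounded in [0, 1] (in bits) and symmetric, which makes the change score directly comparable across words; the individual frame distributions also make each prediction inspectable, which is the interpretability advantage the abstract claims.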
Related papers
- Transparent Semantic Change Detection with Dependency-Based Profiles [1.1340133299604382]
We investigate an alternative method which relies purely on dependency co-occurrence patterns of words. We demonstrate that it is effective for semantic change detection and even outperforms a number of distributional semantic models.
arXiv Detail & Related papers (2026-01-06T10:25:36Z) - Improving Semantic Uncertainty Quantification in LVLMs with Semantic Gaussian Processes [60.75226150503949]
We propose a Bayesian framework that quantifies semantic uncertainty by analyzing the geometric structure of answer embeddings. SGPU maps generated answers into a dense semantic space, computes the Gram matrix of their semantic embeddings, and summarizes their semantic configuration. We show that SGPU transfers across models and modalities, indicating that its spectral representation captures general patterns of semantic uncertainty.
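The "Gram matrix plus spectral summary" idea from this abstract can be illustrated with a small sketch: the entropy of the Gram matrix's normalized eigenvalue spectrum is low when the answers cluster tightly (low semantic uncertainty) and high when they are dispersed. This is an assumed proxy for illustration only, not the paper's actual SGPU implementation; the embeddings below are random placeholders.

```python
import numpy as np

def spectral_uncertainty(embeddings):
    """Entropy of the normalized eigenvalue spectrum of the Gram matrix.

    Rows are unit-normalized answer embeddings. Near-duplicate answers
    concentrate mass on one eigenvalue (low entropy); semantically
    dispersed answers spread it out (high entropy).
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    gram = X @ X.T
    eigvals = np.clip(np.linalg.eigvalsh(gram), 0.0, None)
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
clustered = rng.normal(size=(8, 16)) * 0.01 + rng.normal(size=16)  # near-duplicates
dispersed = rng.normal(size=(8, 16))                               # unrelated answers
```

Under this proxy, `spectral_uncertainty(clustered)` comes out well below `spectral_uncertainty(dispersed)`, matching the intuition that agreement among sampled answers signals confidence.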
arXiv Detail & Related papers (2025-12-16T08:15:24Z) - MASCOTS: Model-Agnostic Symbolic COunterfactual explanations for Time Series [4.664512594743523]
We introduce MASCOTS, a method that generates meaningful and diverse counterfactual observations in a model-agnostic manner. By operating in a symbolic feature space, MASCOTS enhances interpretability while preserving fidelity to the original data and model.
arXiv Detail & Related papers (2025-03-28T12:48:12Z) - Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z) - Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and +2% inference time, decent performance gain has been achieved on both small and large models.
arXiv Detail & Related papers (2023-03-21T07:00:35Z) - Semantic-aware Contrastive Learning for More Accurate Semantic Parsing [32.74456368167872]
We propose a semantic-aware contrastive learning algorithm, which can learn to distinguish fine-grained meaning representations.
Experiments on two standard datasets show that our approach achieves significant improvements over MLE baselines.
arXiv Detail & Related papers (2023-01-19T07:04:32Z) - Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions on their positions.
Experiments on Semantic Textual Similarity show NDD to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
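The mask-and-predict pipeline above has two concrete ingredients: extracting the longest common subsequence of the paired texts as anchor positions, and comparing the MLM's predicted distributions at those positions. The sketch below implements the LCS step and a distribution comparison with KL divergence; the actual method uses a masked language model to produce the distributions, which are stand-in dicts here.

```python
from math import log

def longest_common_subsequence(a, b):
    """Word-level LCS between two token lists (classic dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack to recover the subsequence itself.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between word distributions (dicts word -> probability)."""
    return sum(pv * log(pv / max(q.get(w, 0.0), eps))
               for w, pv in p.items() if pv > 0)

sent_a = "the bank raised interest rates".split()
sent_b = "the bank of the river flooded".split()
anchors = longest_common_subsequence(sent_a, sent_b)
```

Here `anchors` is `["the", "bank"]`: even though "bank" is shared verbatim, an MLM's predicted distributions at its position would differ sharply between the two contexts, which is exactly the signal the neighboring distribution divergence (NDD) exploits.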
arXiv Detail & Related papers (2021-10-04T03:59:15Z) - Grammatical Profiling for Semantic Change Detection [6.3596637237946725]
We use grammatical profiling as an alternative method for semantic change detection.
We demonstrate that it can be used for semantic change detection and even outperforms some distributional semantic methods.
arXiv Detail & Related papers (2021-09-21T18:38:18Z) - A comprehensive comparative evaluation and analysis of Distributional Semantic Models [61.41800660636555]
We perform a comprehensive evaluation of type distributional vectors, either produced by static DSMs or obtained by averaging the contextualized vectors generated by BERT.
The results show that the alleged superiority of predict based models is more apparent than real, and surely not ubiquitous.
We borrow from cognitive neuroscience the methodology of Representational Similarity Analysis (RSA) to inspect the semantic spaces generated by distributional models.
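Representational Similarity Analysis, as borrowed here, compares two embedding spaces without aligning them: build each space's representational dissimilarity matrix (RDM) over the same word set, then correlate the matrices' upper triangles. A minimal sketch, assuming cosine dissimilarity and a Spearman correlation without tie handling (common RSA choices, not necessarily the paper's exact configuration):

```python
import numpy as np

def rdm(X):
    """Representational dissimilarity matrix: 1 - cosine similarity."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

def rsa_score(X, Y):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(X.shape[0], k=1)
    a, b = rdm(X)[iu], rdm(Y)[iu]
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks (no tie correction)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(1)
space_a = rng.normal(size=(10, 50))                    # e.g. static DSM vectors
space_b = space_a + 0.01 * rng.normal(size=(10, 50))   # near-identical space
```

Because only the pattern of pairwise dissimilarities is compared, RSA works across spaces of different dimensionality, e.g. static DSM vectors against averaged BERT vectors.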
arXiv Detail & Related papers (2021-05-20T15:18:06Z) - SChME at SemEval-2020 Task 1: A Model Ensemble for Detecting Lexical Semantic Change [58.87961226278285]
This paper describes SChME, a method used in SemEval-2020 Task 1 on unsupervised detection of lexical semantic change.
SChME uses a model ensemble combining signals from distributional models (word embeddings) and word frequency models, where each model casts a vote indicating the probability that a word underwent semantic change according to that feature.
arXiv Detail & Related papers (2020-12-02T23:56:34Z)
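The voting scheme described for SChME can be sketched in a few lines: each feature model contributes a probability of change, and the ensemble aggregates the votes, here by simple averaging. The feature names and the averaging rule are illustrative assumptions, not SChME's actual configuration.

```python
def ensemble_vote(feature_scores, threshold=0.5):
    """Average per-feature change probabilities into one ensemble score.

    feature_scores: dict mapping feature name -> probability in [0, 1]
    that the word underwent semantic change according to that feature.
    Returns (score, changed), where `changed` applies a simple threshold
    for the binary subtask of SemEval-2020 Task 1.
    """
    score = sum(feature_scores.values()) / len(feature_scores)
    return score, score >= threshold

# Hypothetical per-feature probabilities for one target word.
votes = {
    "embedding_cosine_distance": 0.8,
    "frequency_shift": 0.6,
    "neighborhood_overlap": 0.7,
}
score, changed = ensemble_vote(votes)
```

Averaging heterogeneous signals like this hedges against any single feature's failure mode, e.g. frequency shifts caused by topical drift rather than genuine meaning change.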
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.