Transparent Semantic Change Detection with Dependency-Based Profiles
- URL: http://arxiv.org/abs/2601.02891v1
- Date: Tue, 06 Jan 2026 10:25:36 GMT
- Title: Transparent Semantic Change Detection with Dependency-Based Profiles
- Authors: Bach Phan-Tat, Kris Heylen, Dirk Geeraerts, Stefano De Pascale, Dirk Speelman
- Abstract summary: We investigate an alternative method that relies purely on dependency co-occurrence patterns of words. We demonstrate that it is effective for semantic change detection and even outperforms a number of distributional semantic models.
- Score: 1.1340133299604382
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most modern computational approaches to lexical semantic change detection (LSC) rely on embedding-based distributional word representations with neural networks. Despite their strong performance on LSC benchmarks, these models are often opaque. We investigate an alternative method that relies purely on dependency co-occurrence patterns of words. We demonstrate that it is effective for semantic change detection and even outperforms a number of distributional semantic models. We provide an in-depth quantitative and qualitative analysis of the predictions, showing that they are plausible and interpretable.
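For intuition, here is a minimal sketch of a dependency-profile approach to change detection, assuming parsed (head, relation, dependent) triples from each time period; the profile keys and the Jensen-Shannon comparison are illustrative choices, not necessarily the authors' exact pipeline.

```python
from collections import Counter
from math import log2

def dependency_profile(triples, target):
    """Count (relation, head/child) contexts of `target` in parsed triples.

    `triples` is an iterable of (head, relation, dependent) tuples from any
    dependency parser; this representation is an assumption of the sketch.
    """
    profile = Counter()
    for head, rel, dep in triples:
        if head == target:
            profile[(rel, "child:" + dep)] += 1
        if dep == target:
            profile[(rel, "head:" + head)] += 1
    return profile

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two count profiles."""
    keys = set(p) | set(q)
    ps = sum(p.values()) or 1
    qs = sum(q.values()) or 1
    jsd = 0.0
    for k in keys:
        a, b = p[k] / ps, q[k] / qs
        m = (a + b) / 2
        if a: jsd += 0.5 * a * log2(a / m)
        if b: jsd += 0.5 * b * log2(b / m)
    return jsd

# Toy corpora from two periods: "cell" shifts from biology to telephony.
period1 = [("divide", "nsubj", "cell"), ("cell", "amod", "living")]
period2 = [("ring", "nsubj", "cell"), ("cell", "compound", "phone")]
p1 = dependency_profile(period1, "cell")
p2 = dependency_profile(period2, "cell")
print(f"change score: {jensen_shannon(p1, p2):.3f}")  # higher = more change
```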
Related papers
- ReFRAME or Remain: Unsupervised Lexical Semantic Change Detection with Frame Semantics [1.1340133299604382]
We develop a new method for detecting semantic change based on frame semantics. We show that this method is effective for detecting semantic change and can even outperform many distributional semantic models.
arXiv Detail & Related papers (2026-02-04T13:00:49Z)
- Improving Semantic Uncertainty Quantification in LVLMs with Semantic Gaussian Processes [60.75226150503949]
We propose a Bayesian framework that quantifies semantic uncertainty by analyzing the geometric structure of answer embeddings. SGPU maps generated answers into a dense semantic space, computes the Gram matrix of their semantic embeddings, and summarizes their semantic configuration. We show that SGPU transfers across models and modalities, indicating that its spectral representation captures general patterns of semantic uncertainty.
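A hedged sketch of the recipe the abstract outlines: embed sampled answers, form their Gram matrix, and summarize it spectrally. The entropy-of-spectrum score and the function name `spectral_uncertainty` below are assumptions, not the paper's exact statistic.

```python
import numpy as np

def spectral_uncertainty(embeddings):
    """Score semantic dispersion of sampled answers via their Gram spectrum.

    `embeddings`: (n_answers, dim) array. We use the entropy of the
    normalized Gram eigenvalues; the paper's statistic may differ.
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    gram = X @ X.T                       # cosine Gram matrix
    eigvals = np.clip(np.linalg.eigvalsh(gram), 0, None)
    probs = eigvals / eigvals.sum()      # normalized spectrum
    probs = probs[probs > 0]
    return float(-(probs * np.log(probs)).sum())  # high = diverse answers

rng = np.random.default_rng(0)
agreeing = rng.normal(0, 0.01, (5, 16)) + rng.normal(0, 1, 16)  # near-duplicates
scattered = rng.normal(0, 1, (5, 16))                           # diverse answers
print(spectral_uncertainty(agreeing), "<", spectral_uncertainty(scattered))
```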
arXiv Detail & Related papers (2025-12-16T08:15:24Z)
- MASCOTS: Model-Agnostic Symbolic COunterfactual explanations for Time Series [4.664512594743523]
We introduce MASCOTS, a method that generates meaningful and diverse counterfactual observations in a model-agnostic manner. By operating in a symbolic feature space, MASCOTS enhances interpretability while preserving fidelity to the original data and model.
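A rough, model-agnostic sketch of the idea: move to a simplified piecewise "symbolic" feature space, greedily edit one segment, and reconstruct until a black-box label flips. The segment representation and single-edit search are illustrative stand-ins, not MASCOTS itself.

```python
import numpy as np

def to_segments(series, n_segments=8):
    """Piecewise-aggregate approximation: the simplified 'symbolic' feature
    space (a SAX-style stand-in; MASCOTS' exact representation may differ)."""
    return np.array([s.mean() for s in np.array_split(series, n_segments)])

def from_segments(means, length):
    """Reconstruct a step-wise series from segment means."""
    idx_groups = np.array_split(np.arange(length), len(means))
    return np.concatenate([np.full(len(idx), m) for m, idx in zip(means, idx_groups)])

def counterfactual(series, classify, n_segments=8, step=1.0):
    """Greedily edit one segment in the symbolic space until the black-box
    `classify` flips its label; model-agnostic by construction."""
    base = classify(series)
    means = to_segments(series, n_segments)
    scale = series.std() + 1e-9
    for i in range(n_segments):
        for delta in (step, -step):
            edited = means.copy()
            edited[i] += delta * scale
            candidate = from_segments(edited, len(series))
            if classify(candidate) != base:
                return candidate
    return None

classify = lambda s: int(s.mean() > 0)          # toy black-box model
series = np.sin(np.linspace(0, 6, 64)) - 0.05
cf = counterfactual(series, classify)
print("label flipped:", cf is not None)
```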
arXiv Detail & Related papers (2025-03-28T12:48:12Z)
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that produces more explainable activation heatmaps while simultaneously improving model performance.
Specifically, our framework introduces a new metric, explanation consistency, to adaptively reweight training samples during learning.
The framework then promotes learning by paying closer attention to training samples whose explanations differ substantially.
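A minimal sketch of consistency-based reweighting, assuming explanations arrive as paired heatmaps (e.g., saliency under two views of the same input); the cosine agreement and exponential weighting are illustrative choices.

```python
import numpy as np

def explanation_consistency(expl_a, expl_b):
    """Cosine agreement between two explanation heatmaps for the same input
    (the pairing scheme is an assumption of this sketch)."""
    a, b = expl_a.ravel(), expl_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def sample_weights(expl_pairs, sharpness=2.0):
    """Upweight samples whose explanations disagree, so training pays
    closer attention to them (the reweighting form is an assumption)."""
    consistency = np.array([explanation_consistency(a, b) for a, b in expl_pairs])
    weights = np.exp(sharpness * (1.0 - consistency))  # low consistency -> big weight
    return weights / weights.sum() * len(weights)      # normalize to mean 1

rng = np.random.default_rng(1)
stable = (rng.random((8, 8)),) * 2               # identical heatmaps
noisy = (rng.random((8, 8)), rng.random((8, 8))) # disagreeing heatmaps
print(sample_weights([stable, noisy]))           # second weight is larger
```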
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- Learning Semantic Textual Similarity via Topic-informed Discrete Latent Variables [17.57873577962635]
We develop a topic-informed discrete latent variable model for semantic textual similarity.
Our model learns a shared latent space for sentence-pair representation via vector quantization.
We show that our model surpasses several strong neural baselines on semantic textual similarity tasks.
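For concreteness, a bare-bones vector-quantization step of the kind described; the codebook size and dimensionality are arbitrary, and the straight-through estimator and topic-informed prior are omitted in this sketch.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Snap each encoding to its nearest codebook vector (gradients and the
    topic-informed prior are omitted in this sketch)."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K) distances
    codes = d.argmin(axis=1)                                   # discrete latent ids
    return codebook[codes], codes

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 32))  # K=16 latent codes, dim 32 (assumed sizes)
z = rng.normal(size=(4, 32))          # encoder outputs for 4 sentence pairs
quantized, codes = vector_quantize(z, codebook)
print(codes)                          # shared discrete codes used for similarity
```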
arXiv Detail & Related papers (2022-11-07T15:09:58Z)
- Learning Disentangled Representations for Natural Language Definitions [0.0]
We argue that recurrent syntactic and semantic regularities in textual data can be used to provide the models with both structural biases and generative factors.
We leverage the semantic structure of definitional sentences, a representative and semantically dense category of sentence types, to train a Variational Autoencoder that learns disentangled representations.
arXiv Detail & Related papers (2022-09-22T14:31:55Z)
- Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization [76.68866368409216]
We propose learning to dynamically select discretization tightness conditioned on inputs.
We show that dynamically varying tightness in communication bottlenecks can improve model performance on visual reasoning and reinforcement learning tasks.
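A toy sketch of input-conditioned tightness: a gate picks the effective codebook size per input, so noisier inputs pass through a coarser bottleneck; the gating heuristic below is purely illustrative.

```python
import numpy as np

def dynamic_quantize(z, codebook, gate):
    """Quantize `z` with an input-conditioned 'tightness': the gate picks how
    many codebook entries are available (the gating form is an assumption)."""
    k = gate(z)                                   # tighter bottleneck -> smaller k
    active = codebook[:k]                         # restrict to the first k codes
    d = ((z[None, :] - active) ** 2).sum(-1)
    return active[d.argmin()], k

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 8))
# Toy gate: high-variance inputs get a coarse (tight) bottleneck, clean ones a fine one.
gate = lambda z: 4 if np.abs(z).mean() > 1.0 else 64
for z in (rng.normal(0, 0.5, 8), rng.normal(0, 3.0, 8)):
    _, k = dynamic_quantize(z, codebook, gate)
    print("codebook size used:", k)
```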
arXiv Detail & Related papers (2022-02-02T23:54:26Z)
- Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlap frequently occurs between paired texts in natural language processing tasks such as text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We treat the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions at their positions.
Experiments on Semantic Textual Similarity show the resulting neighboring distribution divergence (NDD) to be more sensitive to various semantic differences, especially on highly overlapped text pairs.
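A schematic of the mask-and-predict computation with the MLM abstracted behind a callback (a real masked language model would replace `toy_mlm`); the KL aggregation over shared positions is one plausible reading, not a verbatim reimplementation.

```python
import numpy as np

def ndd(tokens_a, tokens_b, shared_positions, mlm_dist):
    """Neighboring-distribution divergence sketch: for each shared (longest
    common sequence) word, compare the MLM's predicted distributions at its
    position in the two texts. `mlm_dist(tokens, i)` is any function
    returning a vocabulary distribution -- a real MLM in practice."""
    def kl(p, q):
        p, q = np.asarray(p) + 1e-12, np.asarray(q) + 1e-12
        return float((p * np.log(p / q)).sum())
    return np.mean([kl(mlm_dist(tokens_a, i), mlm_dist(tokens_b, j))
                    for i, j in shared_positions])

# Toy stand-in MLM: distribution depends on the word right after the mask.
VOCAB = {"cat": 0, "dog": 1, "sat": 2, "ran": 3}
def toy_mlm(tokens, i):
    dist = np.ones(len(VOCAB))
    nxt = tokens[i + 1] if i + 1 < len(tokens) else None
    if nxt in VOCAB:
        dist[VOCAB[nxt]] += 5.0   # context shifts the predicted distribution
    return dist / dist.sum()

a = ["the", "cat", "sat"]
b = ["the", "dog", "sat"]
# "the" is shared at position 0 in both texts; its neighborhood differs.
print(f"NDD: {ndd(a, b, [(0, 0)], toy_mlm):.3f}")
```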
arXiv Detail & Related papers (2021-10-04T03:59:15Z)
- Grammatical Profiling for Semantic Change Detection [6.3596637237946725]
We use grammatical profiling as an alternative method for semantic change detection.
We demonstrate that it is effective for this task and even outperforms some distributional semantic methods.
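As a sketch, grammatical profiling can be as simple as comparing a word's distribution over morphological tags between two corpora; the cosine distance below is one plausible comparison measure, not necessarily the paper's.

```python
from collections import Counter

def grammatical_profile(tagged_tokens, target):
    """Distribution of morphological tags (e.g., number, tense, case) that
    `target` appears with; the tag inventory is whatever the tagger emits."""
    return Counter(tag for word, tag in tagged_tokens if word == target)

def cosine_distance(p, q):
    keys = set(p) | set(q)
    dot = sum(p[k] * q[k] for k in keys)
    na = sum(v * v for v in p.values()) ** 0.5
    nb = sum(v * v for v in q.values()) ** 0.5
    return 1.0 - dot / (na * nb + 1e-9)

# Toy example: a noun that becomes predominantly plural in the later corpus.
old = [("medium", "Number=Sing")] * 9 + [("medium", "Number=Plur")]
new = [("medium", "Number=Sing")] * 2 + [("medium", "Number=Plur")] * 8
score = cosine_distance(grammatical_profile(old, "medium"),
                        grammatical_profile(new, "medium"))
print(f"change score: {score:.3f}")
```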
arXiv Detail & Related papers (2021-09-21T18:38:18Z)
- A comprehensive comparative evaluation and analysis of Distributional Semantic Models [61.41800660636555]
We perform a comprehensive evaluation of type distributional vectors, either produced by static DSMs or obtained by averaging the contextualized vectors generated by BERT.
The results show that the alleged superiority of predict-based models is more apparent than real, and certainly not ubiquitous.
We borrow from cognitive neuroscience the methodology of Representational Similarity Analysis (RSA) to inspect the semantic spaces generated by distributional models.
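A compact sketch of RSA in this setting: build each space's pairwise cosine-similarity matrix over the same word list and correlate their upper triangles with Spearman's rho; the toy spaces are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def rsa(space_a, space_b):
    """Representational Similarity Analysis: correlate the pairwise-similarity
    structures of two embedding spaces over the same word list."""
    def sim_matrix(X):
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        return Xn @ Xn.T
    iu = np.triu_indices(len(space_a), k=1)   # upper triangle, no diagonal
    return spearmanr(sim_matrix(space_a)[iu], sim_matrix(space_b)[iu])[0]

rng = np.random.default_rng(0)
base = rng.normal(size=(20, 50))                              # one model's word vectors
rotated = base @ np.linalg.qr(rng.normal(size=(50, 50)))[0]   # same geometry, rotated
print(f"RSA(base, rotated) = {rsa(base, rotated):.2f}")       # ~1.0
```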
arXiv Detail & Related papers (2021-05-20T15:18:06Z)
- Disentangled Contrastive Learning for Learning Robust Textual Representations [13.880693856907037]
We introduce the concept of momentum representation consistency to align features and leverage power normalization while conforming to the uniformity constraint.
Experimental results on NLP benchmarks demonstrate that our approach obtains better results than the baselines.
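A hedged sketch of two ingredients the abstract names, an exponential-moving-average (momentum) encoder update and power normalization; how they enter the paper's training objective is not reproduced here.

```python
import numpy as np

def momentum_update(online, momentum, m=0.999):
    """EMA update of the momentum encoder's weights; aligning online and
    momentum features is the 'representation consistency' term (sketch)."""
    return {k: m * momentum[k] + (1 - m) * online[k] for k in online}

def power_normalize(x, alpha=0.5):
    """Signed power normalization, a common trick to flatten bursty features
    (whether this matches the paper's variant is an assumption)."""
    return np.sign(x) * np.abs(x) ** alpha

weights_online = {"proj": np.ones(4)}
weights_momentum = {"proj": np.zeros(4)}
weights_momentum = momentum_update(weights_online, weights_momentum)
print(weights_momentum["proj"])          # drifts slowly toward the online weights
print(power_normalize(np.array([-4.0, 0.25, 9.0])))
```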
arXiv Detail & Related papers (2021-04-11T03:32:49Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
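A minimal sketch of the kNN lookup: retrieve the training examples nearest to a query in representation space and inspect their labels; the Euclidean metric and toy clusters are assumptions.

```python
import numpy as np

def knn_explain(query_repr, train_reprs, train_labels, k=3):
    """Return the k training examples nearest to the query in representation
    space -- the examples most 'responsible' for the model's prediction."""
    d = np.linalg.norm(train_reprs - query_repr, axis=1)
    idx = np.argsort(d)[:k]
    return idx, [train_labels[i] for i in idx]

rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 1, (10, 16)), rng.normal(5, 1, (10, 16))])
labels = ["negative"] * 10 + ["positive"] * 10
query = rng.normal(5, 1, 16)              # lands in the "positive" cluster
idx, votes = knn_explain(query, train, labels)
print(idx, votes)   # if votes disagree with the model, suspect spurious features
```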
arXiv Detail & Related papers (2020-10-18T16:55:25Z)