Towards explainable evaluation of language models on the semantic
similarity of visual concepts
- URL: http://arxiv.org/abs/2209.03723v1
- Date: Thu, 8 Sep 2022 11:40:57 GMT
- Authors: Maria Lymperaiou, George Manoliadis, Orfeas Menis Mastromichalakis,
Edmund G. Dervakos and Giorgos Stamou
- Abstract summary: We examine the behavior of high-performing pre-trained language models, focusing on the task of semantic similarity for visual vocabularies.
First, we address the need for explainable evaluation metrics, necessary for understanding the conceptual quality of retrieved instances.
Secondly, adversarial interventions on salient query semantics expose vulnerabilities of opaque metrics and highlight patterns in learned linguistic representations.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent breakthroughs in NLP research, such as the advent of Transformer
models, have indisputably contributed to major advancements in several tasks.
However, few works investigate the robustness and explainability of their
evaluation strategies. In this work, we examine the behavior of high-performing
pre-trained language models, focusing on the task of semantic similarity for
visual vocabularies. First, we address the need for explainable evaluation
metrics, which are necessary for understanding the conceptual quality of retrieved
instances. Our proposed metrics provide valuable insights at both the local and
global level, exposing the shortcomings of widely used approaches. Secondly,
adversarial interventions on salient query semantics expose vulnerabilities of
opaque metrics and highlight patterns in learned linguistic representations.