ImmunoLingo: Linguistics-based formalization of the antibody language
- URL: http://arxiv.org/abs/2209.12635v1
- Date: Mon, 26 Sep 2022 12:33:14 GMT
- Title: ImmunoLingo: Linguistics-based formalization of the antibody language
- Authors: Mai Ha Vu, Philippe A. Robert, Rahmad Akbar, Bartlomiej Swiatczak,
Geir Kjetil Sandve, Dag Trygve Truslew Haug, Victor Greiff
- Abstract summary: Apparent parallels between natural language and biological sequences have led to a surge in the application of deep language models (LMs) to the analysis of antibody and other biological sequences.
A lack of a rigorous linguistic formalization of biological sequence languages has led to largely domain-unspecific applications of LMs.
A linguistic formalization establishes linguistically-informed and thus domain-adapted components for LM applications.
- Score: 0.5412332666265471
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Apparent parallels between natural language and biological
sequences have led to a recent surge in the application of deep language
models (LMs) to the analysis of antibody and other biological sequences.
However, the lack of a rigorous linguistic formalization of biological
sequence languages, which would define basic components such as the lexicon
(i.e., the discrete units of the language) and the grammar (i.e., the rules
that link sequence well-formedness, structure, and meaning), has led to
largely domain-unspecific applications of
LMs, which do not take into account the underlying structure of the biological
sequences studied. A linguistic formalization, on the other hand, establishes
linguistically-informed and thus domain-adapted components for LM applications.
It would facilitate a better understanding of how differences and similarities
between natural language and biological sequences influence the quality of LMs,
which is crucial for the design of interpretable models with extractable
sequence-function relationship rules, such as the ones underlying the antibody
specificity prediction problem. Deciphering the rules of antibody specificity
is crucial to accelerating rational and in silico biotherapeutic drug design.
Here, we formalize the properties of the antibody language and thereby
establish not only a foundation for the application of linguistic tools in
adaptive immune receptor analysis but also for the systematic immunolinguistic
studies of immune receptor specificity in general.
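To make the abstract's terms concrete, here is a minimal, hypothetical sketch of what a "lexicon" and a well-formedness "grammar" for antibody heavy-chain variable (VH) sequences could look like in code. It is not from the paper (which provides no code); the region lengths, conserved-cysteine spacing, and WGxG motif below are rough conventions chosen purely for illustration.

```python
# Minimal sketch, assuming a toy regular grammar; not the paper's formalism.
import re

# Lexicon: the discrete units of the language (the 20 standard amino acids).
LEXICON = set("ACDEFGHIKLMNPQRSTVWY")

# Grammar: toy well-formedness rules for a VH domain. The pattern
# C ... C ... WG.G loosely mimics the two conserved cysteines of the
# immunoglobulin fold and the J-segment WGxG motif that closes CDR-H3.
# All lengths are approximate and purely illustrative.
AA = "[ACDEFGHIKLMNPQRSTVWY]"
VH_GRAMMAR = re.compile(
    rf"{AA}{{15,30}}C{AA}{{60,90}}C{AA}{{3,25}}WG{AA}G{AA}{{5,15}}"
)

def is_well_formed(seq: str) -> bool:
    """Well-formed iff every token is in the lexicon and the whole
    sequence matches the toy grammar."""
    return set(seq) <= LEXICON and VH_GRAMMAR.fullmatch(seq) is not None

# Lexically valid but structurally ill-formed strings are rejected:
print(is_well_formed("ACDEF"))  # False: too short, conserved motifs missing
```

In this framing, LM design choices map onto linguistic components: the tokenizer fixes the lexicon, while the model's learned constraints approximate the grammar linking well-formedness to structure and function.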
Related papers
- From Sentences to Sequences: Rethinking Languages in Biological System [6.304152224988003]
We revisit the notion of language in biological systems to better understand how NLP successes can be effectively translated to biological domains.
By treating the 3D structure of biomolecules as the semantic content of a sentence, we highlight the importance of structural evaluation.
arXiv Detail & Related papers (2025-07-01T16:57:39Z)
- Can Language Models Learn Typologically Implausible Languages? [62.823015163987996]
Grammatical features across human languages show intriguing correlations often attributed to learning biases in humans.
We discuss how language models (LMs) allow us to better determine the role of domain-general learning biases in language universals.
We test LMs on an array of highly naturalistic but counterfactual versions of the English (head-initial) and Japanese (head-final) languages.
arXiv Detail & Related papers (2025-02-17T20:40:01Z)
- Measuring Grammatical Diversity from Small Corpora: Derivational Entropy Rates, Mean Length of Utterances, and Annotation Invariance [0.0]
I show that a grammar's derivational entropy and the mean length of the utterances it generates are fundamentally linked.
I demonstrate that mean length of utterance (MLU) is not a mere proxy but a fundamental measure of syntactic diversity.
The derivational entropy rate indexes how different grammatical annotation frameworks assess the grammatical complexity of treebanks.
arXiv Detail & Related papers (2024-12-08T22:54:57Z)
- Evaluating Morphological Compositional Generalization in Large Language Models [17.507983593566223]
We investigate the morphological generalization abilities of large language models (LLMs) through the lens of compositionality.
We focus on agglutinative languages such as Turkish and Finnish.
Our analysis shows that LLMs struggle with morphological compositional generalization, particularly when applied to novel word roots.
While models can identify individual morphological combinations better than chance, their performance lacks systematicity, leading to significant accuracy gaps compared to humans.
arXiv Detail & Related papers (2024-10-16T15:17:20Z)
- Linguistic Structure from a Bottleneck on Sequential Information Processing [5.850665541267672]
We show that natural-language-like systematicity arises in codes that are constrained by predictive information.
We show that human languages are structured to have low predictive information at the levels of phonology, morphology, syntax, and semantics.
arXiv Detail & Related papers (2024-05-20T15:25:18Z)
- How Important Is Tokenization in French Medical Masked Language Models? [7.866517623371908]
Subword tokenization has become the prevailing standard in the field of natural language processing (NLP).
This paper seeks to delve into the complexities of subword tokenization in the French biomedical domain across a variety of NLP tasks.
We introduce an original tokenization strategy that integrates morpheme-enriched word segmentation into existing tokenization methods.
arXiv Detail & Related papers (2024-02-22T23:11:08Z)
- Linguistic laws in biology [0.13812010983144798]
Linguistic laws have been investigated by quantitative linguists for nearly a century.
Biologists from a range of disciplines have started to explore the prevalence of these laws beyond language.
We propose a new conceptual framework for the study of linguistic laws in biology.
arXiv Detail & Related papers (2023-10-11T11:08:20Z)
- Interactive Molecular Discovery with Natural Language [69.89287960545903]
We propose conversational molecular design, a novel task that adopts natural language for describing and editing target molecules.
To better accomplish this task, we design ChatMol, a knowledgeable and versatile generative pre-trained model, enhanced by injecting experimental property information.
arXiv Detail & Related papers (2023-06-21T02:05:48Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Reprogramming Pretrained Language Models for Antibody Sequence Infilling [72.13295049594585]
Computational design of antibodies involves generating novel and diverse sequences, while maintaining structural consistency.
Recent deep learning models have shown impressive results; however, the limited number of known antibody sequence/structure pairs frequently leads to degraded performance.
In our work, we address this challenge by leveraging Model Reprogramming (MR), which repurposes a model pretrained on a source language for tasks in a different language with scarce data.
arXiv Detail & Related papers (2022-10-05T20:44:55Z)
- Linguistically inspired roadmap for building biologically reliable protein language models [0.5412332666265471]
We argue that guidance drawn from linguistics can aid with building more interpretable protein LMs.
We provide a linguistics-based roadmap for protein LM pipeline choices with regard to training data, tokenization, token embedding, sequence embedding, and model interpretation (a toy tokenization sketch follows this list).
arXiv Detail & Related papers (2022-07-03T08:42:44Z)
- Constructing a Family Tree of Ten Indo-European Languages with Delexicalized Cross-linguistic Transfer Patterns [57.86480614673034]
We formalize the delexicalized transfer as interpretable tree-to-string and tree-to-tree patterns.
This allows us to quantitatively probe cross-linguistic transfer and extend inquiries of Second Language Acquisition.
arXiv Detail & Related papers (2020-07-17T15:56:54Z)
- Evaluating Transformer-Based Multilingual Text Classification [55.53547556060537]
We argue that NLP tools perform unequally across languages with different syntactic and morphological structures.
We calculate word order and morphological similarity indices to aid our empirical study.
arXiv Detail & Related papers (2020-04-29T03:34:53Z)
- Where New Words Are Born: Distributional Semantic Analysis of Neologisms and Their Semantic Neighborhoods [51.34667808471513]
We investigate the importance of two factors, semantic sparsity and frequency growth rates of semantic neighbors, formalized in the distributional semantics paradigm.
We show that both factors are predictive of word emergence, although we find more support for the latter hypothesis.
arXiv Detail & Related papers (2020-01-21T19:09:49Z)
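As referenced in the roadmap entry above, tokenization is one of the pipeline choices where linguistic guidance applies. Below is a minimal, hypothetical sketch of BPE-style subword tokenization over amino-acid sequences; the mini-corpus and merge count are made up, and none of the listed papers ships this code.

```python
# Minimal BPE-style sketch over amino-acid sequences; illustrative only.
from collections import Counter

def _apply(toks, a, b):
    """Fuse every adjacent (a, b) pair in a token list, in place."""
    i = 0
    while i < len(toks) - 1:
        if toks[i] == a and toks[i + 1] == b:
            toks[i:i + 2] = [a + b]
        else:
            i += 1

def learn_bpe(corpus, num_merges):
    """Learn merge rules by repeatedly fusing the most frequent pair."""
    seqs = [list(s) for s in corpus]  # start from single residues
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in seqs:
            pairs.update(zip(toks, toks[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        for toks in seqs:
            _apply(toks, a, b)
    return merges

def tokenize(seq, merges):
    """Tokenize a new sequence by replaying the learned merges in order."""
    toks = list(seq)
    for a, b in merges:
        _apply(toks, a, b)
    return toks

# Hypothetical mini-corpus of CDR-H3-like fragments.
corpus = ["CARDYW", "CARDGYW", "CAKDYW", "CTRDYW"]
merges = learn_bpe(corpus, num_merges=3)
print(tokenize("CARDYW", merges))  # e.g. ['CA', 'RD', 'YW']
```

Whether such statistically induced subwords align with biologically meaningful units (motifs, structural fragments) is exactly the kind of question a linguistic formalization makes testable.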
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.