On the Evolution of Programming Languages
- URL: http://arxiv.org/abs/2007.02699v1
- Date: Sat, 27 Jun 2020 10:18:14 GMT
- Title: On the Evolution of Programming Languages
- Authors: K. R. Chowdhary
- Abstract summary: It tries to give supportive evidence that the newer languages are more robust than the previous ones.
An analysis of the most prominent programming languages is presented, emphasizing how the features of existing languages have influenced the development of new programming languages.
At the end, it suggests a set of experimental languages that may rule the world of programming languages in the era of new multi-core architectures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper attempts to connect the evolution of computer languages with the
evolution of life, where the latter has been dictated by the \emph{theory of
evolution of species}, and tries to give supportive evidence that newer
languages are more robust than the previous ones: they carry over the mixed
features of older languages, such that strong features get added to them and
weak features of older languages get removed. In addition, an analysis of the
most prominent programming languages is presented, emphasizing how the features
of existing languages have influenced the development of new programming
languages. At the end, it suggests a set of experimental languages that may
rule the world of programming languages in the era of new multi-core
architectures.
Index terms: programming language evolution, classification of languages,
future languages, scripting languages.
Related papers
- The Role of Language Imbalance in Cross-lingual Generalisation: Insights from Cloned Language Experiments
In this study, we investigate an unintuitive novel driver of cross-lingual generalisation: language imbalance.
We observe that the existence of a predominant language during training boosts the performance of less frequent languages.
As we extend our analysis to real languages, we find that infrequent languages still benefit from frequent ones, yet whether language imbalance causes cross-lingual generalisation in that setting remains inconclusive.
arXiv Detail & Related papers (2024-04-11T17:58:05Z)
- MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual Language Modeling
We introduce a new paradigm that encodes the same information with segments of consistent size across diverse languages.
MYTE produces shorter encodings for all 99 analyzed languages.
This, in turn, improves multilingual LM performance and diminishes the perplexity gap throughout diverse languages.
arXiv Detail & Related papers (2024-03-15T21:21:11Z)
- A Simple Framework to Accelerate Multilingual Language Model for Monolingual Text Generation
This study introduces a novel framework designed to expedite text generation in non-English languages.
It predicts larger linguistic units than those of conventional multilingual tokenizers and is specifically tailored to the target language.
Our empirical results demonstrate that the proposed framework increases the generation speed by a factor of 1.9 compared to standard decoding.
arXiv Detail & Related papers (2024-01-19T12:26:57Z)
- AdaCCD: Adaptive Semantic Contrasts Discovery Based Cross Lingual Adaptation for Code Clone Detection
AdaCCD is a novel cross-lingual adaptation method that can detect cloned codes in a new language without annotations in that language.
We evaluate the cross-lingual adaptation results of AdaCCD by constructing a multilingual code clone detection benchmark consisting of 5 programming languages.
arXiv Detail & Related papers (2023-11-13T12:20:48Z)
- Language Agnostic Code Embeddings
We focus on the cross-lingual capabilities of code embeddings across different programming languages.
Code embeddings comprise two distinct components: one deeply tied to the nuances and syntax of a specific language, and the other remaining agnostic to these details.
We show that when we isolate and eliminate this language-specific component, we witness significant improvements in downstream code retrieval tasks.
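As a rough illustration of the decomposition described above, one simple way to strip a language-specific component from a set of embeddings is to subtract each language's mean vector. This is a generic sketch under that assumption, not necessarily the paper's actual method:

```python
import numpy as np

def remove_language_component(embeddings, languages):
    """Center each embedding by the mean vector of its language.

    embeddings: (n, d) array of code embeddings.
    languages: list of n language labels (e.g. "python", "cpp").

    Subtracting the per-language mean removes a simple
    language-specific component, leaving a more
    language-agnostic residual for cross-lingual retrieval.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    out = embeddings.copy()
    for lang in set(languages):
        mask = np.array([l == lang for l in languages])
        out[mask] -= embeddings[mask].mean(axis=0)
    return out
```

After centering, the mean embedding of each language is the zero vector, so retrieval distances are no longer dominated by which language a snippet happens to be written in.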
arXiv Detail & Related papers (2023-10-25T17:34:52Z)
- The Less the Merrier? Investigating Language Representation in Multilingual Models
We investigate the linguistic representation of different languages in multilingual models.
We observe from our experiments that community-centered models perform better at distinguishing between languages in the same family for low-resource languages.
arXiv Detail & Related papers (2023-10-20T02:26:34Z)
- On the Impact of Language Selection for Training and Evaluating Programming Language Models
We evaluate the similarity of programming languages by analyzing their representations using a CodeBERT-based model.
Our experiments reveal that token representations in languages such as C++, Python, and Java exhibit proximity to one another, whereas the same tokens in languages such as Mathematica and R display significant dissimilarity.
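The proximity claim above is typically measured with cosine similarity between token embeddings. A minimal sketch follows; the vectors are made-up illustrations, not the paper's actual CodeBERT representations:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of the same token in different languages.
tok_cpp = [0.9, 0.1, 0.3]
tok_java = [0.8, 0.2, 0.35]
tok_r = [-0.2, 0.9, -0.4]

# A high value suggests the two languages represent the token similarly;
# a low value suggests dissimilar representations.
sim_close = cosine_similarity(tok_cpp, tok_java)
sim_far = cosine_similarity(tok_cpp, tok_r)
```

In a study like the one summarized, such pairwise similarities would be aggregated over many shared tokens to compare how close two languages' representation spaces are.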
arXiv Detail & Related papers (2023-08-25T12:57:59Z)
- Soft Language Clustering for Multilingual Model Pre-training
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z)
- MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages
We benchmark code generation from natural language commands extending beyond English.
We annotated a total of 896 NL-code pairs in three languages: Spanish, Japanese, and Russian.
While the difficulties vary across these three languages, all systems lag significantly behind their English counterparts.
arXiv Detail & Related papers (2022-03-16T04:21:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.