CRIL: A Concurrent Reversible Intermediate Language
- URL: http://arxiv.org/abs/2309.07310v1
- Date: Wed, 13 Sep 2023 20:52:54 GMT
- Title: CRIL: A Concurrent Reversible Intermediate Language
- Authors: Shunya Oguchi (Graduate School of Informatics, Nagoya University),
Shoji Yuen (Graduate School of Informatics, Nagoya University)
- Abstract summary: We present a reversible intermediate language with concurrency for translating a high-level concurrent programming language to a lower-level concurrent programming language while preserving reversibility.
We propose CRIL as an extension of the RIL used by Mogensen for a functional reversible language, incorporating multi-thread process invocation and synchronization primitives based on the P-V operations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a reversible intermediate language with concurrency for
translating a high-level concurrent programming language to another lower-level
concurrent programming language, keeping reversibility. Intermediate languages
are commonly used in compiling a source program to an object code program
closer to the machine code, where an intermediate language enables behavioral
analysis and optimization to be decomposed in steps. We propose CRIL
(Concurrent Reversible Intermediate Language) as an extension of RIL used by
Mogensen for a functional reversible language, incorporating a multi-thread
process invocation and the synchronization primitives based on the P-V
operations. We show that the operational semantics of CRIL enjoys the properties
of reversibility, including the causal safety and causal liveness proposed by
Lanese et al., by checking the axiomatic properties. The operational semantics is
defined by composing the bidirectional control flow with the dependency
information on memory updates, called the annotation DAG. We show a simple
example of 'airline ticketing' to illustrate how CRIL preserves the causality
for reversibility in imperative programs with concurrency.
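The combination of P-V synchronization and reversible execution described in the abstract can be sketched in Python. This is an illustrative sketch only, not CRIL's actual syntax or semantics: the class and method names (`ReversibleSemaphore`, `undo`) are assumptions, and a simple history stack stands in for the annotation DAG that CRIL uses to track causal dependencies between memory updates.

```python
from threading import Lock

class ReversibleSemaphore:
    """Sketch of P-V operations that record a history so each forward
    step can be undone. In CRIL the dependency information lives in an
    annotation DAG; here a stack of (op, thread_id) entries stands in."""

    def __init__(self, value=1):
        self.value = value
        self.history = []          # stack of (op, thread_id) entries
        self._lock = Lock()

    def P(self, tid):
        # Forward step: acquire (decrement); refuses if unavailable.
        with self._lock:
            if self.value == 0:
                return False       # caller must retry or block
            self.value -= 1
            self.history.append(("P", tid))
            return True

    def V(self, tid):
        # Forward step: release (increment).
        with self._lock:
            self.value += 1
            self.history.append(("V", tid))

    def undo(self):
        # Backward step: invert the most recent recorded operation.
        with self._lock:
            op, tid = self.history.pop()
            if op == "P":
                self.value += 1    # undo an acquire
            else:
                self.value -= 1    # undo a release
            return (op, tid)

sem = ReversibleSemaphore(1)
sem.P("t1")            # t1 enters its critical section
assert sem.value == 0
sem.V("t1")            # t1 leaves
sem.undo()             # reverses the V: value back to 0
assert sem.value == 0
sem.undo()             # reverses the P: value back to 1
assert sem.value == 1
```

Undoing steps in reverse order of the history is what makes this sketch causally safe in the sense discussed in the paper: a release is never undone before the acquire that preceded it.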
Related papers
- ReF Decompile: Relabeling and Function Call Enhanced Decompile [50.86228893636785]
The goal of decompilation is to convert compiled low-level code (e.g., assembly code) back into high-level programming languages.
This task supports various reverse engineering applications, such as vulnerability identification, malware analysis, and legacy software migration.
arXiv Detail & Related papers (2025-02-17T12:38:57Z) - Synthetic Programming Elicitation for Text-to-Code in Very Low-Resource Programming and Formal Languages [21.18996339478024]
We introduce synthetic programming elicitation and compilation (SPEAC).
SPEAC produces syntactically correct programs more frequently and without sacrificing semantic correctness.
We empirically evaluate the performance of SPEAC in a case study for the UCLID5 formal verification language.
arXiv Detail & Related papers (2024-06-05T22:16:19Z) - AdaCCD: Adaptive Semantic Contrasts Discovery Based Cross Lingual
Adaptation for Code Clone Detection [69.79627042058048]
AdaCCD is a novel cross-lingual adaptation method that can detect cloned codes in a new language without annotations in that language.
We evaluate the cross-lingual adaptation results of AdaCCD by constructing a multilingual code clone detection benchmark consisting of 5 programming languages.
arXiv Detail & Related papers (2023-11-13T12:20:48Z) - Program Translation via Code Distillation [20.668229308907495]
Traditional machine translation relies on parallel corpora for supervised translation.
Recent unsupervised neural machine translation techniques have overcome data limitations.
We propose a novel model called Code Distillation (CoDist)
arXiv Detail & Related papers (2023-10-17T04:59:15Z) - Language-Oriented Communication with Semantic Coding and Knowledge
Distillation for Text-to-Image Generation [53.97155730116369]
We put forward a novel framework of language-oriented semantic communication (LSC)
In LSC, machines communicate using human language messages that can be interpreted and manipulated via natural language processing (NLP) techniques for SC efficiency.
We introduce three innovative algorithms: 1) semantic source coding (SSC), which compresses a text prompt into its key head words capturing the prompt's syntactic essence; 2) semantic channel coding (SCC), which improves robustness against errors by substituting head words with their lengthier synonyms; and 3) semantic knowledge distillation (SKD), which produces listener-customized prompts via in-context learning the listener's
arXiv Detail & Related papers (2023-09-20T08:19:05Z) - Bidirectional Correlation-Driven Inter-Frame Interaction Transformer for
Referring Video Object Segmentation [44.952526831843386]
We propose a correlation-driven inter-frame interaction Transformer, dubbed BIFIT, to address these issues in RVOS.
Specifically, we design a lightweight plug-and-play inter-frame interaction module in the decoder.
A vision-language interaction is implemented before the Transformer to facilitate the correlation between the visual and linguistic features.
arXiv Detail & Related papers (2023-07-02T10:29:35Z) - An Interleaving Semantics of the Timed Concurrent Language for
Argumentation to Model Debates and Dialogue Games [0.0]
We propose a language for modelling concurrent interaction between agents.
Such a language exploits a shared memory used by the agents to communicate and reason on the acceptability of their beliefs.
We show how it can be used to model interactions such as debates and dialogue games taking place between intelligent agents.
arXiv Detail & Related papers (2023-06-13T10:41:28Z) - Semantics-Aware Dynamic Localization and Refinement for Referring Image
Segmentation [102.25240608024063]
Referring image segmentation segments an image region described by a language expression.
We develop an algorithm that shifts from being localization-centric to segmentation-centric.
Compared to its counterparts, our method is more versatile yet effective.
arXiv Detail & Related papers (2023-03-11T08:42:40Z) - Augmented Language Models: a Survey [55.965967655575454]
This survey reviews works in which language models (LMs) are augmented with reasoning skills and the ability to use tools.
We refer to them as Augmented Language Models (ALMs)
The missing token objective allows ALMs to learn to reason, use tools, and even act, while still performing standard natural language tasks.
arXiv Detail & Related papers (2023-02-15T18:25:52Z) - Multimodal Transformer with Variable-length Memory for
Vision-and-Language Navigation [79.1669476932147]
Vision-and-Language Navigation (VLN) is a task that an agent is required to follow a language instruction to navigate to the goal position.
Recent Transformer-based VLN methods have made great progress benefiting from the direct connections between visual observations and the language instruction.
We introduce Multimodal Transformer with Variable-length Memory (MTVM) for visually-grounded natural language navigation.
arXiv Detail & Related papers (2021-11-10T16:04:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.