Self-Replicating Mechanical Universal Turing Machine
- URL: http://arxiv.org/abs/2409.19037v1
- Date: Fri, 27 Sep 2024 08:28:37 GMT
- Title: Self-Replicating Mechanical Universal Turing Machine
- Authors: Ralph P. Lano
- Abstract summary: This paper presents the implementation of a self-replicating finite-state machine (FSM) and a self-replicating Turing Machine (TM) using bio-inspired mechanisms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the implementation of a self-replicating finite-state machine (FSM) and a self-replicating Turing machine (TM) using bio-inspired mechanisms. Building on previous work that introduced self-replicating structures capable of sorting, copying, and reading information, this study demonstrates the computational power of these mechanisms by explicitly constructing a functioning FSM and TM. It further establishes the universality of the system by emulating the UTM(5,5) of Neary and Woods.
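The paper realizes these machines mechanically; purely to illustrate the underlying computational model, the sketch below is a minimal software Turing machine driven by a finite-state controller: a transition table maps (state, read symbol) to (next state, write symbol, head move). The helper run_tm and its toy rule table (a unary incrementer) are invented for this illustration; they are not the UTM(5,5) rule set of Neary and Woods nor the paper's mechanical construction.

```python
# Minimal software sketch of a Turing machine driven by a finite-state
# controller. The rule table below is a toy unary incrementer, NOT the
# UTM(5,5) of Neary and Woods or the paper's mechanical construction.

def run_tm(rules, tape, state="start", head=0, blank="_", halt="halt", max_steps=1000):
    """Simulate a TM given rules: (state, symbol) -> (new_state, write, move)."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return state, "".join(cells)

# Toy rule table: move right over 1s, write a 1 at the first blank, halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt",  "1", "R"),
}

print(run_tm(rules, "111"))   # ('halt', '1111')
```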
Related papers
- Finite State Machine with Input and Process Render [0.22940141855172028]
In this poster, we present FSMIPR, an automatic visualization tool for Finite State Machines (FSMs) that generates videos of FSM simulations.
Educators can input any formal definition of an FSM together with an input string, and FSMIPR generates an accompanying video of the simulation (a minimal simulation sketch follows this entry).
arXiv Detail & Related papers (2024-09-25T16:14:15Z)
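As a rough sketch of what such a tool animates, the snippet below simulates a deterministic FSM on an input word and records the visited states, i.e. the trace that a frame-by-frame visualization would render. The helper simulate_dfa and the even-number-of-1s machine are made up for this illustration and are not FSMIPR's actual interface.

```python
# Illustrative only: a tiny DFA simulator that records the state after each
# input symbol, i.e. the trace a frame-by-frame visualization renders.
# The machine below (even number of 1s) is a made-up example, not FSMIPR.

def simulate_dfa(states, alphabet, delta, start, accepting, word):
    """Run a DFA on word; return (state trace, accepted?)."""
    assert start in states and set(word) <= set(alphabet)
    trace = [start]
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
        trace.append(state)
    return trace, state in accepting

delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd",   ("odd", "1"): "even"}

trace, accepted = simulate_dfa({"even", "odd"}, {"0", "1"}, delta,
                               "even", {"even"}, "1011")
print(trace, accepted)   # ['even', 'odd', 'odd', 'even', 'odd'] False
```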
- Mechanical Self-replication [0.0]
This study presents a theoretical model for a self-replicating mechanical system inspired by biological processes within living cells.
The model decomposes self-replication into core components, each of which is executed by a single machine constructed from a set of basic block types.
arXiv Detail & Related papers (2024-07-18T09:49:50Z)
- The Buffer Mechanism for Multi-Step Information Reasoning in Language Models [52.77133661679439]
Investigating internal reasoning mechanisms of large language models can help us design better model architectures and training strategies.
In this study, we constructed a symbolic dataset to investigate the mechanisms by which Transformer models employ a vertical thinking strategy.
We proposed a random matrix-based algorithm to enhance the model's reasoning ability, resulting in a 75% reduction in the training time required for the GPT-2 model.
arXiv Detail & Related papers (2024-05-24T07:41:26Z)
- A Multimodal Automated Interpretability Agent [63.8551718480664]
MAIA is a system that uses neural models to automate neural model understanding tasks.
We first characterize MAIA's ability to describe (neuron-level) features in learned representations of images.
We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features, and automatically identifying inputs likely to be misclassified.
arXiv Detail & Related papers (2024-04-22T17:55:11Z)
- Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals [82.68757839524677]
Interpretability research aims to bridge the gap between empirical success and our scientific understanding of large language models (LLMs).
We propose a formulation of competition of mechanisms, which focuses on the interplay of multiple mechanisms instead of individual mechanisms.
Our findings show traces of the mechanisms and their competition across various model components and reveal attention positions that effectively control the strength of certain mechanisms.
arXiv Detail & Related papers (2024-02-18T17:26:51Z)
- Transformer Mechanisms Mimic Frontostriatal Gating Operations When Trained on Human Working Memory Tasks [19.574270595733502]
We analyze the mechanisms that emerge within a vanilla attention-only Transformer trained on a simple sequence modeling task.
We find that, as a result of training, the self-attention mechanism within the Transformer specializes in a way that mirrors the input and output gating mechanisms.
arXiv Detail & Related papers (2024-02-13T04:28:43Z)
- Pure Differential Privacy for Functional Summaries via a Laplace-like Process [8.557392136621894]
This work introduces a novel mechanism for differential privacy on functional summaries.
The Independent Component Laplace Process (ICLP) mechanism treats the summaries of interest as truly infinite-dimensional objects.
Numerical experiments on synthetic and real datasets demonstrate the efficacy of the proposed mechanism.
arXiv Detail & Related papers (2023-08-31T20:24:51Z)
- Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning [79.4957965474334]
A key goal of unsupervised representation learning is "inverting" a data generating process to recover its latent properties.
This paper asks, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?"
We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms.
arXiv Detail & Related papers (2021-10-29T14:04:08Z)
- Transformers with Competitive Ensembles of Independent Mechanisms [97.93090139318294]
We propose a new Transformer layer which divides the hidden representation and parameters into multiple mechanisms, which only exchange information through attention.
We study the resulting architecture, TIM (Transformers with Independent Mechanisms), on a large-scale BERT model, on the Image Transformer, and on speech enhancement, and find evidence for semantically meaningful specialization as well as improved performance.
arXiv Detail & Related papers (2021-02-27T21:48:46Z)
- Generative Language Modeling for Automated Theorem Proving [94.01137612934842]
This work is motivated by the possibility that a major limitation of automated theorem provers compared to humans might be addressable via generation from language models.
We present an automated prover and proof assistant, GPT-f, for the Metamath formalization language, and analyze its performance.
arXiv Detail & Related papers (2020-09-07T19:50:10Z)
- Is Attention All What You Need? -- An Empirical Investigation on Convolution-Based Active Memory and Self-Attention [7.967230034960396]
We evaluate whether various active-memory mechanisms could replace self-attention in a Transformer.
Experiments suggest that active-memory alone achieves comparable results to the self-attention mechanism for language modelling.
For some specific algorithmic tasks, active-memory mechanisms alone outperform both self-attention and a combination of the two.
arXiv Detail & Related papers (2019-12-27T02:01:13Z)