Abstract Geometrical Computation 11: Slanted Firing Squad
Synchronisation on Signal Machines
- URL: http://arxiv.org/abs/2106.11176v1
- Date: Mon, 21 Jun 2021 15:15:01 GMT
- Title: Abstract Geometrical Computation 11: Slanted Firing Squad
Synchronisation on Signal Machines
- Authors: Jérôme Durand-Lose and Aurélien Emmanuel
- Abstract summary: Firing Squad Synchronisation on Cellular Automata is the dynamical synchronisation of finitely many cells without any prior knowledge of their range.
Most of the proposed constructions naturally translate to the continuous setting of signal machines.
This paper provides basic tools for further study of computable accumulation lines in the signal machine model.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Firing Squad Synchronisation on Cellular Automata is the dynamical
synchronisation of finitely many cells without any prior knowledge of their
range. This can be conceived as a signal with an infinite speed. Most of the
proposed constructions naturally translate to the continuous setting of signal
machines and generate fractal figures with an accumulation on a horizontal
line, i.e. synchronously, in the space-time diagram. Signal machines are
studied in a series of articles named Abstract Geometrical Computation.
In the present article, we design a signal machine that is able to
synchronise/accumulate on any non-infinite slope. The slope is encoded in the
initial configuration. This is done by constructing an infinite tree such that
each node computes the way the tree expands.
The interest of Abstract Geometrical Computation is to do away with the
constraint of discrete space, while tackling new difficulties arising from
continuous space. The interest of this paper in particular is to provide basic
tools for further study of computable accumulation lines in the signal machine model.
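The slanted accumulation rests on signals of various speeds colliding in the space-time diagram. A minimal sketch of the underlying geometry, assuming signals are modeled as (position, speed) pairs with straight-line trajectories (the function and names are illustrative, not the paper's construction):

```python
def collision(p1, s1, p2, s2):
    """Meeting (time, position) of two signals emitted at time 0
    from positions p1 and p2 with constant speeds s1 and s2.
    Returns None if they are parallel or only met in the past."""
    if s1 == s2:
        return None  # equal speeds: trajectories never cross
    t = (p2 - p1) / (s1 - s2)  # solve p1 + s1*t == p2 + s2*t
    if t <= 0:
        return None  # intersection lies at or before the start
    return t, p1 + s1 * t

# Two signals heading toward each other from x=0 and x=4
# with speeds +1 and -1 meet at time 2, position 2.
```

A signal machine iterates such collisions: each collision replaces the incoming signals by new ones according to the machine's rules, and an accumulation line arises when the resulting collision points converge.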
Related papers
- SWING: Unlocking Implicit Graph Representations for Graph Random Features [57.956136773668476]
We propose SWING: Space Walks for Implicit Network Graphs, a new class of algorithms for computations involving Graph Random Features on graphs. We provide detailed analysis of SWING and complement it with thorough experiments on different classes of i-graphs.
arXiv Detail & Related papers (2026-02-13T08:12:38Z) - Stationarity and Spectral Characterization of Random Signals on Simplicial Complexes [45.01439616647312]
We propose a probabilistic framework for random signals defined on simplicial complexes. Specifically, we generalize the classical notion of stationarity. We empirically demonstrate the practicality of these benefits through multiple synthetic and real-world simulations.
arXiv Detail & Related papers (2026-02-03T03:31:10Z) - Generative Adversarial Gumbel MCTS for Abstract Visual Composition Generation [29.755551944026738]
We study abstract visual composition in which identity is determined by the configuration and relations among a small set of geometric primitives. An AlphaGo-style search enforces feasibility, while a fine-tuned vision-language model scores semantic alignment as reward signals. Inspired by the Generative Adversarial Network, we use the generated instances for adversarial reward refinement.
arXiv Detail & Related papers (2025-12-01T03:38:44Z) - On the Holographic Geometry of Deterministic Computation [0.0]
Standard simulations of Turing machines suggest a linear relationship between the temporal duration $t$ of a run and the amount of information that must be stored at time $t$. We show that any length-$t$ run can be simulated using $O(\sqrt{t})$ work-tape cells via a Height Compression Theorem for succinct trees together with an Algebraic Replay Engine.
arXiv Detail & Related papers (2025-11-29T19:47:22Z) - Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters [10.403248386029407]
Self-attention is a significant computational bottleneck due to its quadratic complexity in the sequence length.
In this work, we derive the scalar energy function whose gradient computes the self-attention block.
Our formulation reveals that the reduction across the sequence axis can be efficiently computed in parallel through a tree reduction.
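The tree reduction referred to here is the standard pairwise scheme, which reaches $O(\log n)$ depth instead of $O(n)$ sequential steps. A minimal single-process sketch of the idea (illustrative only, not the paper's GPU-cluster implementation):

```python
import operator

def tree_reduce(values, op):
    """Pairwise tree reduction of a non-empty sequence.
    Each round combines adjacent pairs, halving the list,
    so the dependency depth is logarithmic in len(values)."""
    vals = list(values)
    while len(vals) > 1:
        paired = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:         # odd element carries over to the next round
            paired.append(vals[-1])
        vals = paired
    return vals[0]

# tree_reduce(range(8), operator.add) combines 8 values in 3 rounds
# ([0..7] -> [1, 5, 9, 13] -> [6, 22] -> [28]) instead of 7 sequential adds.
```

In the attention setting, the reduction operator is the (associative) combination of partial softmax statistics along the sequence axis, which is what makes the tree layout applicable.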
arXiv Detail & Related papers (2024-08-07T21:16:55Z) - Simulation of Graph Algorithms with Looped Transformers [6.0465914748433915]
We study the ability of transformer networks to simulate algorithms on graphs from a theoretical perspective.
We prove by construction that this architecture can simulate individual algorithms such as Dijkstra's shortest path.
We show a Turing Completeness result with constant width when the extra attention heads are utilized.
arXiv Detail & Related papers (2024-02-02T02:48:03Z) - Information-Theoretic Thresholds for Planted Dense Cycles [52.076657911275525]
We study a random graph model for small-world networks which are ubiquitous in social and biological sciences.
For both detection and recovery of the planted dense cycle, we characterize the information-theoretic thresholds in terms of $n$, $\tau$, and an edge-wise signal-to-noise ratio $\lambda$.
arXiv Detail & Related papers (2024-02-01T03:39:01Z) - Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifold, named soft manifold, that can solve this situation.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z) - Recovering the Graph Underlying Networked Dynamical Systems under
Partial Observability: A Deep Learning Approach [7.209528581296429]
We study the problem of graph structure identification, i.e., of recovering the graph of dependencies among time series.
We devise a new feature vector computed from the observed time series and prove that these features are linearly separable.
We use these features to train Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2022-08-08T20:32:28Z) - Gransformer: Transformer-based Graph Generation [14.161975556325796]
Gransformer is an algorithm based on Transformer for generating graphs.
We modify the Transformer encoder to exploit the structural information of the given graph.
We also introduce a graph-based familiarity measure between node pairs.
arXiv Detail & Related papers (2022-03-25T14:05:12Z) - Temporally-Consistent Surface Reconstruction using Metrically-Consistent
Atlases [131.50372468579067]
We propose a method for unsupervised reconstruction of a temporally-consistent sequence of surfaces from a sequence of time-evolving point clouds.
We represent the reconstructed surfaces as atlases computed by a neural network, which enables us to establish correspondences between frames.
Our approach outperforms state-of-the-art ones on several challenging datasets.
arXiv Detail & Related papers (2021-11-12T17:48:25Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - VersaGNN: a Versatile accelerator for Graph neural networks [81.1667080640009]
We propose VersaGNN, an ultra-efficient, systolic-array-based versatile hardware accelerator.
VersaGNN achieves on average 3712$\times$ speedup with 1301.25$\times$ energy reduction on CPU, and 35.4$\times$ speedup with 17.66$\times$ energy reduction on GPU.
arXiv Detail & Related papers (2021-05-04T04:10:48Z) - Deep Parametric Continuous Convolutional Neural Networks [92.87547731907176]
Parametric Continuous Convolution is a new learnable operator that operates over non-grid structured data.
Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes.
arXiv Detail & Related papers (2021-01-17T18:28:23Z) - Neural Granular Sound Synthesis [53.828476137089325]
Granular sound synthesis is a popular audio generation technique based on rearranging sequences of small waveform windows.
We show that generative neural networks can implement granular synthesis while alleviating most of its shortcomings.
arXiv Detail & Related papers (2020-08-04T08:08:00Z) - Breaking the waves: asymmetric random periodic features for low-bitrate
kernel machines [13.704881067616995]
We introduce the general framework of asymmetric random periodic features, where the two signals of interest are observed through random periodic features.
We prove uniform probabilistic error bounds holding for all signal pairs from an infinite low-complexity set.
arXiv Detail & Related papers (2020-04-14T14:44:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.