A Mind Cannot Be Smeared Across Time
- URL: http://arxiv.org/abs/2601.11620v1
- Date: Sun, 11 Jan 2026 01:08:33 GMT
- Title: A Mind Cannot Be Smeared Across Time
- Authors: Michael Timothy Bennett
- Abstract summary: I show that conscious experience appears unified and simultaneous. I introduce a precise temporal semantics over windowed trajectories. I review neurophysiological evidence suggesting that consciousness depends on phase synchrony and effective connectivity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Whether machines can be conscious depends not only on what they compute, but \emph{when} they compute it. Most deployed artificial systems realise their functions via sequential or time-multiplexed updates. Conscious experience appears unified and simultaneous. I show that this difference matters formally. I augment Stack Theory with algebraic laws relating within-time-window constraint satisfaction to conjunction. I introduce a precise temporal semantics over windowed trajectories $\tau^{\Delta,s}$ and prove that existential temporal realisation $\Diamond_\Delta$ does not preserve conjunction. A system can realise all the ingredients of experience across time without ever instantiating the experienced conjunction itself. I then distinguish two postulates. StrongSync requires objective co-instantiation of the grounded conjunction within the window, while WeakSync permits temporal ``smearing''. I formalise concurrency-capacity to measure what is needed to satisfy StrongSync. Finally, I review neurophysiological evidence suggesting that consciousness depends on phase synchrony and effective connectivity, and that loss of consciousness is often associated with its breakdown. This evidence makes WeakSync less plausible. Under StrongSync, software consciousness on strictly sequential substrates is impossible for contents whose grounding requires two or more simultaneous contributors. The more parts from which simultaneous contribution is required, the more concurrency-capacity is needed. The hardware matters. Consciousness attribution therefore requires architectural inspection, not just functional performance.
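The central formal claim, that existential temporal realisation $\Diamond_\Delta$ does not preserve conjunction, can be illustrated with a minimal sketch. The toy semantics below (a window as a list of states, a constraint as a predicate on states) is an illustrative simplification, not the paper's actual Stack Theory formalism.

```python
# Toy illustration: a window can realise each constraint at some
# instant without any single instant realising their conjunction.
# Predicate names and the state encoding are illustrative assumptions.

def diamond(window, phi):
    """Existential realisation: phi holds at some state in the window."""
    return any(phi(state) for state in window)

def conj(phi, psi):
    """Pointwise conjunction of two constraints."""
    return lambda state: phi(state) and psi(state)

# A strictly sequential (time-multiplexed) trajectory: each state
# activates only one "contributor" at a time.
window = [{"red"}, {"round"}]

red = lambda s: "red" in s
round_ = lambda s: "round" in s

# Both ingredients are realised somewhere in the window...
assert diamond(window, red) and diamond(window, round_)
# ...but the conjunction is never instantiated at a single instant.
assert not diamond(window, conj(red, round_))
```

In this toy reading, StrongSync would demand a state where both predicates hold at once (co-instantiation), which a one-contributor-per-step substrate cannot supply, while WeakSync would accept the smeared window above.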
Related papers
- EgoExo-Con: Exploring View-Invariant Video Temporal Understanding [66.25513481642845]
Can Video-LLMs achieve consistent temporal understanding when videos capture the same event from different viewpoints? EgoExo-Con (Consistency) is a benchmark of comprehensively synchronized egocentric and exocentric video pairs with human-refined queries in natural language. We propose View-GRPO, a novel reinforcement learning framework that effectively strengthens view-specific temporal reasoning.
arXiv Detail & Related papers (2025-10-30T03:53:22Z) - SyncTalk++: High-Fidelity and Efficient Synchronized Talking Heads Synthesis Using Gaussian Splatting [25.523486023087916]
A lifelike talking head requires synchronized coordination of subject identity, lip movements, facial expressions, and head poses. We introduce SyncTalk++ to address the critical issue of synchronization, identified as the ``devil'' in creating realistic talking heads. Our approach maintains consistency and continuity in visual details across frames and significantly improves rendering speed and quality, achieving up to 101 frames per second.
arXiv Detail & Related papers (2025-06-17T17:22:12Z) - CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models [12.277869260176068]
Token sparsity mitigates inefficiencies in token usage, while neuron sparsity reduces high-dimensional computations. Recently, these two sparsity paradigms have evolved largely in parallel, fostering the prevailing assumption that they function independently. We propose CoreMatching, a co-adaptive sparse inference framework, which leverages the synergy between token and neuron sparsity to enhance inference efficiency.
arXiv Detail & Related papers (2025-05-25T17:16:34Z) - SyncMind: Measuring Agent Out-of-Sync Recovery in Collaborative Software Engineering [74.04271300772155]
SyncMind is a framework that systematically defines the out-of-sync problem faced by large language model (LLM) agents in software engineering. Based on SyncMind, we create SyncBench, a benchmark featuring 24,332 instances of agent out-of-sync scenarios in real-world CSE.
arXiv Detail & Related papers (2025-02-10T19:38:36Z) - AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising [49.785626309848276]
AsyncDiff is a universal and plug-and-play acceleration scheme that enables model parallelism across multiple devices.
For the Stable Diffusion v2.1, AsyncDiff achieves a 2.7x speedup with negligible degradation and a 4.0x speedup with only a slight reduction of 0.38 in CLIP Score.
Our experiments also demonstrate that AsyncDiff can be readily applied to video diffusion models with encouraging performances.
arXiv Detail & Related papers (2024-06-11T03:09:37Z) - Once Upon a $\textit{Time}$ in $\textit{Graph}$: Relative-Time
Pretraining for Complex Temporal Reasoning [96.03608822291136]
We make use of the underlying nature of time, and suggest creating a graph structure based on the relative placements of events along the time axis.
Inspired by the graph view, we propose RemeMo, which explicitly connects all temporally-scoped facts by modeling the time relations between any two sentences.
Experimental results show that RemeMo outperforms the baseline T5 on multiple temporal question answering datasets.
arXiv Detail & Related papers (2023-10-23T08:49:00Z) - GestSync: Determining who is speaking without a talking head [67.75387744442727]
We introduce Gesture-Sync: determining if a person's gestures are correlated with their speech or not.
In comparison to Lip-Sync, Gesture-Sync is far more challenging as there is a far looser relationship between the voice and body movement.
We show that the model can be trained using self-supervised learning alone, and evaluate its performance on the LRS3 dataset.
arXiv Detail & Related papers (2023-10-08T22:48:30Z) - Sync+Sync: A Covert Channel Built on fsync with Storage [2.800768893804362]
We build a covert channel named Sync+Sync for persistent storage.
Sync+Sync delivers a transmission bandwidth of 20,000 bits per second at an error rate of about 0.40% with an ordinary solid-state drive.
We launch side-channel attacks with Sync+Sync and manage to precisely detect operations of a victim database.
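The mechanism behind an fsync-based covert channel can be sketched as follows: a sender modulates contention on the shared storage device by issuing (or withholding) fsync calls in fixed time slots, and a receiver decodes bits from the latency of its own fsync probes. The slot length, threshold, and encoding below are illustrative assumptions; the paper's reported bandwidth of 20,000 bits per second requires a tuned implementation, not this sketch.

```python
# Minimal sketch of an fsync-timing channel in the spirit of Sync+Sync.
# SLOT and THRESHOLD are assumed values for illustration only.
import os
import time

SLOT = 0.01        # assumed slot length in seconds
THRESHOLD = 0.002  # assumed latency threshold separating 0 from 1

def send_bits(path, bits):
    """Sender: fsync during a slot to signal 1, stay idle to signal 0."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    try:
        for bit in bits:
            start = time.monotonic()
            if bit:
                os.write(fd, b"x")
                os.fsync(fd)  # contend for the storage device
            # idle out the remainder of the slot
            time.sleep(max(0.0, SLOT - (time.monotonic() - start)))
    finally:
        os.close(fd)

def receive_bits(path, n):
    """Receiver: probe fsync latency each slot; contention reads as 1."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    bits = []
    try:
        for _ in range(n):
            start = time.monotonic()
            os.write(fd, b"y")
            os.fsync(fd)  # latency rises when the sender also syncs
            latency = time.monotonic() - start
            bits.append(1 if latency > THRESHOLD else 0)
            time.sleep(max(0.0, SLOT - (time.monotonic() - start)))
    finally:
        os.close(fd)
    return bits
```

In practice the sender and receiver run concurrently on separate files backed by the same device; the design choice of timing a *probe* fsync (rather than reading shared state) is what makes the channel covert.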
arXiv Detail & Related papers (2023-09-14T12:22:29Z) - Prune Spatio-temporal Tokens by Semantic-aware Temporal Accumulation [89.88214896713846]
STA score considers two critical factors: temporal redundancy and semantic importance.
We apply the STA module to off-the-shelf video Transformers and Video Swin Transformers.
On Kinetics-400 and Something-Something V2, this yields roughly a 30% reduction in computation with a negligible 0.2% accuracy drop.
arXiv Detail & Related papers (2023-08-08T19:38:15Z) - OpenSync: An opensource platform for synchronizing multiple measures in
neuroscience experiments [0.0]
This paper introduces an open-source platform named OpenSync, which can be used to synchronize multiple measures in neuroscience experiments.
This platform helps to automatically integrate, synchronize and record physiological measures (e.g., electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, body motion, etc.), user input responses (e.g., from mouse, keyboard, joystick, etc.), and task-related information (stimulus markers).
Our experimental results show that the OpenSync platform is able to synchronize multiple measures with microsecond resolution.
arXiv Detail & Related papers (2021-07-29T23:09:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.