A Structural Complexity Analysis of Synchronous Dynamical Systems
- URL: http://arxiv.org/abs/2312.08385v1
- Date: Tue, 12 Dec 2023 10:49:33 GMT
- Title: A Structural Complexity Analysis of Synchronous Dynamical Systems
- Authors: Eduard Eiben, Robert Ganian, Thekla Hamm, Viktoriia Korchemna
- Abstract summary: We study the three most notable problems in synchronous dynamic systems.
All three problems were known to be intractable in the classical sense.
We show that all three problems remain intractable even on instances of constant treewidth.
- Score: 33.788917274180484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synchronous dynamic systems are well-established models that have been used
to capture a range of phenomena in networks, including opinion diffusion,
spread of disease and product adoption. We study the three most notable
problems in synchronous dynamic systems: whether the system will transition to
a target configuration from a starting configuration, whether the system will
reach convergence from a starting configuration, and whether the system is
guaranteed to converge from every possible starting configuration. While all
three problems were known to be intractable in the classical sense, we initiate
the study of their exact boundaries of tractability from the perspective of
structural parameters of the network by making use of the more fine-grained
parameterized complexity paradigm.
As our first result, we consider treewidth - as the most prominent and
ubiquitous structural parameter - and show that all three problems remain
intractable even on instances of constant treewidth. We complement this
negative finding with fixed-parameter algorithms for the former two problems
parameterized by treedepth, a well-studied restriction of treewidth. While it
is possible to rule out a similar algorithm for convergence guarantee under
treedepth, we conclude with a fixed-parameter algorithm for this last problem
when parameterized by treedepth and the maximum in-degree.
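The three problems from the abstract can be illustrated by direct simulation: since the configuration space of an n-vertex Boolean system is finite, a trajectory either hits a previously seen configuration or a fixed point. This is a minimal sketch, not the paper's algorithm; the graph, the threshold update rule, and all function names are illustrative assumptions (and brute-force simulation is exponential in the worst case, which is exactly why the paper seeks parameterized algorithms).

```python
# Minimal sketch (illustrative, not from the paper): a synchronous
# dynamical system where every vertex holds a Boolean state and all
# vertices update simultaneously via a local threshold rule.

def step(config, in_neighbors, threshold=1):
    """One synchronous update: vertex v becomes 1 iff at least
    `threshold` of its in-neighbors are currently 1."""
    return tuple(
        1 if sum(config[u] for u in in_neighbors[v]) >= threshold else 0
        for v in range(len(config))
    )

def reaches(start, target, in_neighbors):
    """Reachability: does the trajectory from `start` ever hit `target`?
    The configuration space is finite, so stop on the first repeat."""
    seen, c = set(), start
    while c not in seen:
        if c == target:
            return True
        seen.add(c)
        c = step(c, in_neighbors)
    return c == target

def converges(start, in_neighbors):
    """Convergence: does the trajectory from `start` end in a fixed
    point (rather than a limit cycle of length >= 2)?"""
    seen, c = set(), start
    while c not in seen:
        seen.add(c)
        nxt = step(c, in_neighbors)
        if nxt == c:
            return True
        c = nxt
    return False  # trajectory entered a proper cycle

# Directed 3-cycle: each vertex copies its single in-neighbor.
in_nb = {0: [2], 1: [0], 2: [1]}
print(reaches((1, 0, 0), (0, 0, 1), in_nb))  # the state rotates, so True
print(converges((1, 0, 0), in_nb))           # perpetual 3-cycle, so False
```

The third problem (convergence guarantee) would quantify over all 2^n starting configurations, which makes the exponential blow-up explicit and motivates the treedepth-plus-in-degree parameterization.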
Related papers
- Triadic-OCD: Asynchronous Online Change Detection with Provable Robustness, Optimality, and Convergence [2.1348070823841363]
This paper develops a triadic-OCD framework with certifiable robustness, provable optimality, and guaranteed convergence.
The proposed algorithm can be realized in a fully asynchronous distributed manner, easing the necessity of transmitting the data to a single server.
The non-asymptotic convergence property of Triadic-OCD is theoretically analyzed, and its complexity to achieve an $\epsilon$-optimal point is derived.
arXiv Detail & Related papers (2024-05-03T10:10:11Z) - The Boundaries of Tractability in Hierarchical Task Network Planning [15.926731632774768]
We study the complexity-theoretic boundaries of tractability for three classical problems in the context of Hierarchical Task Network Planning.
We show that all three problems can be solved in polynomial time on primitive task networks of constant partial order width.
arXiv Detail & Related papers (2024-01-25T13:34:33Z) - The Complexity of Optimizing Atomic Congestion [14.845310803203724]
Atomic congestion games are a classic topic in network design, routing, and algorithmic game theory.
We show that the problem remains highly intractable even on extremely simple networks.
We conclude by extending our analysis towards the (even more challenging) min-max variant of the problem.
arXiv Detail & Related papers (2023-12-15T21:31:30Z) - HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel
Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters by utilizing the specifically designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-23T17:27:40Z) - DepGraph: Towards Any Structural Pruning [68.40343338847664]
We study general structural pruning of arbitrary architecture like CNNs, RNNs, GNNs and Transformers.
We propose a general and fully automatic method, Dependency Graph (DepGraph), to explicitly model the dependency between layers and comprehensively group parameters for pruning.
In this work, we extensively evaluate our method on several architectures and tasks, including ResNe(X)t, DenseNet, MobileNet and Vision transformer for images, GAT for graph, DGCNN for 3D point cloud, alongside LSTM for language, and demonstrate that, even with a
arXiv Detail & Related papers (2023-01-30T14:02:33Z) - NeuralSI: Structural Parameter Identification in Nonlinear Dynamical
Systems [9.77270939559057]
This paper explores a new framework, dubbed NeuralSI, for structural identification.
Our approach seeks to estimate nonlinear parameters from governing equations.
The trained model can also be extrapolated under both standard and extreme conditions.
arXiv Detail & Related papers (2022-08-26T16:32:51Z) - Hyperparameter Tuning in Echo State Networks [0.0]
Echo State Networks are a type of recurrent neural network with a large randomly generated reservoir and a small number of readout connections trained via linear regression.
We propose an alternative approach to hyperparameter tuning based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
arXiv Detail & Related papers (2022-07-16T16:20:01Z) - The Sample Complexity of One-Hidden-Layer Neural Networks [57.6421258363243]
We study a class of scalar-valued one-hidden-layer networks, and inputs bounded in Euclidean norm.
We prove that controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees.
We analyze two important settings where a mere spectral norm control turns out to be sufficient.
arXiv Detail & Related papers (2022-02-13T07:12:02Z) - Unified Field Theory for Deep and Recurrent Neural Networks [56.735884560668985]
We present a unified and systematic derivation of the mean-field theory for both recurrent and deep networks.
We find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks.
Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in $1/n$.
arXiv Detail & Related papers (2021-12-10T15:06:11Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.