Computational Complexity of Segmentation
- URL: http://arxiv.org/abs/2201.13106v1
- Date: Mon, 31 Jan 2022 10:33:03 GMT
- Title: Computational Complexity of Segmentation
- Authors: Federico Adolfi (Ernst-Strüngmann Institute for Neuroscience,
Frankfurt, Germany, University of Bristol, Bristol, UK), Todd Wareham
(Department of Computer Science, Memorial University of Newfoundland,
Canada), Iris van Rooij (Donders Institute for Brain, Cognition, and
Behaviour, Radboud University, The Netherlands)
- Abstract summary: The specification of cognitive system capacities is often shaped by unexamined intuitive assumptions about the search space and complexity of a subcomputation.
We prove two sets of results regarding hardness and search space size that may run counter to intuition.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational feasibility is a widespread concern that guides the framing and
modeling of biological and artificial intelligence. The specification of
cognitive system capacities is often shaped by unexamined intuitive assumptions
about the search space and complexity of a subcomputation. However, a mistaken
intuition might make such initial conceptualizations misleading for what
empirical questions appear relevant later on. We undertake here
computational-level modeling and complexity analyses of segmentation - a widely
hypothesized subcomputation that plays a requisite role in explanations of
capacities across domains - as a case study to show how crucial it is to
formally assess these assumptions. We mathematically prove two sets of results
regarding hardness and search space size that may run counter to intuition, and
position their implications with respect to existing views on the subcapacity.
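The abstract's point about search-space size can be made concrete with a standard counting argument (a minimal sketch, not taken from the paper): a sequence of length n has n - 1 internal boundaries, each of which either is or is not a cut point, so there are 2^(n-1) candidate segmentations, exponential in the input length.

```python
# Illustrative only: enumerate every segmentation of a short sequence to
# show that the candidate space has size 2**(n-1) for a length-n input.
from itertools import combinations

def segmentations(seq):
    """Yield every split of seq into contiguous non-empty segments."""
    n = len(seq)
    boundaries = range(1, n)  # the n-1 internal cut points
    for k in range(n):
        for cuts in combinations(boundaries, k):
            points = [0, *cuts, n]
            yield [seq[i:j] for i, j in zip(points, points[1:])]

all_segs = list(segmentations("abcd"))
print(len(all_segs))  # 8 == 2**(4 - 1)
print(all_segs[:3])   # [['abcd'], ['a', 'bcd'], ['ab', 'cd']]
```

Even this naive enumeration makes the motivation tangible: any account that implicitly assumes exhaustive search over segmentations assumes an exponential-size space.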
Related papers
- The Computational Complexity of Circuit Discovery for Inner Interpretability [0.30723404270319693]
We study circuit discovery with classical and parameterized computational complexity theory.
Our findings reveal a challenging complexity landscape.
This framework allows us to understand the scope and limits of interpretability queries.
arXiv Detail & Related papers (2024-10-10T15:22:48Z)
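As a toy illustration of where hardness in circuit discovery can come from (a hypothetical sketch, not the paper's formalization): if a "circuit" is taken to be an edge subset that reproduces the full model's output, brute-force discovery inspects up to 2^|E| candidates. The tiny linear network below is invented for the example.

```python
# Hypothetical toy network: find the smallest edge subset that preserves
# the full model's output. Brute force checks all 2**|E| subsets.
from itertools import chain, combinations

edges = {("x", "h1"): 1.0, ("x", "h2"): -2.0, ("h1", "y"): 0.5, ("h2", "y"): 0.0}

def forward(active, x=1.0):
    h1 = x * edges[("x", "h1")] if ("x", "h1") in active else 0.0
    h2 = x * edges[("x", "h2")] if ("x", "h2") in active else 0.0
    y = 0.0
    if ("h1", "y") in active:
        y += h1 * edges[("h1", "y")]
    if ("h2", "y") in active:
        y += h2 * edges[("h2", "y")]
    return y

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

full = forward(set(edges))
minimal = min((set(s) for s in subsets(edges)
               if abs(forward(set(s)) - full) < 1e-9), key=len)
print(minimal)  # {('x', 'h1'), ('h1', 'y')}: the h2 path is irrelevant
```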
- Advancing Algorithmic Approaches to Probabilistic Argumentation under the Constellation Approach [0.0]
We develop an algorithm for the complex task of computing the probability that a set of arguments is a complete extension.
An experimental evaluation shows the promise of our approach.
arXiv Detail & Related papers (2024-07-06T12:08:38Z)
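A brute-force baseline clarifies what the task above asks for (a minimal sketch under the simplifying assumption that only arguments, not attacks, are independently probabilistic; the paper's algorithm is more refined than this): enumerate the induced subframeworks, test Dung's conditions for a complete extension, and sum the probabilities of the worlds in which the set qualifies.

```python
# Illustrative constellation-style computation on an invented framework:
# P(S is a complete extension) = sum of probabilities of argument worlds
# whose induced framework has S as a complete extension.
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_complete(S, args, attacks):
    S = set(S)
    def attackers(a):
        return {b for (b, c) in attacks if c == a}
    def defended(a):
        return all(any((c, b) in attacks for c in S) for b in attackers(a))
    conflict_free = not any((a, b) in attacks for a in S for b in S)
    admissible = conflict_free and all(defended(a) for a in S)
    # complete = admissible and contains every argument it defends
    return admissible and all(a in S for a in args if defended(a))

def prob_complete(S, args, p, attacks):
    total = 0.0
    for world in map(set, powerset(args)):
        if not set(S) <= world:
            continue  # S cannot be an extension if a member is absent
        sub = {(a, b) for (a, b) in attacks if a in world and b in world}
        if is_complete(S, world, sub):
            w = 1.0
            for a in args:
                w *= p[a] if a in world else 1 - p[a]
            total += w
    return total

args, attacks = ["a", "b", "c"], {("a", "b"), ("b", "c")}
p = {"a": 0.9, "b": 0.5, "c": 0.8}
print(prob_complete({"a", "c"}, args, p, attacks))  # 0.72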
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
Thanks to this over-completeness, discriminative features for the task can be formed adaptively, in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representations can be considered an effective alternative to traditional CNNs and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z)
- Balancing Explainability-Accuracy of Complex Models [8.402048778245165]
We introduce a new approach for complex models based on correlation impact.
We propose approaches for both scenarios of independent features and dependent features.
We provide an upper bound on the complexity of our proposed approach for dependent features.
arXiv Detail & Related papers (2023-05-23T14:20:38Z)
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have not been considered before.
Our method significantly outperforms previous methods on both the new dataset and existing datasets.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
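A generic neural-symbolic pattern for such queries (a sketch of the general idea, not necessarily this paper's method) is to compose pretrained link-predictor scores with a t-norm for conjunction and a max over existential variables; the score table below is a hypothetical stand-in for a trained predictor.

```python
# Hypothetical 2-hop query  ?Y : r1(a, X) AND r2(X, Y)  answered by
# composing link-predictor scores: product t-norm for AND, max over X.
entities = ["a", "b", "c", "d"]

# Stand-in for a trained link predictor: score[(head, relation, tail)] in [0, 1].
score = {(h, r, t): 0.1 for h in entities for r in ("r1", "r2") for t in entities}
score.update({("a", "r1", "b"): 0.9, ("b", "r2", "c"): 0.8, ("a", "r1", "d"): 0.6})

def two_hop(anchor, r1, r2, y):
    # product t-norm for the conjunction; max over the intermediate X
    return max(score[(anchor, r1, x)] * score[(x, r2, y)] for x in entities)

ranked = sorted(entities, key=lambda y: -two_hop("a", "r1", "r2", y))
print([(y, round(two_hop("a", "r1", "r2", y), 2)) for y in ranked])
# [('c', 0.72), ('a', 0.09), ('b', 0.09), ('d', 0.09)]: 'c' is reached via X = 'b'
```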
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- Generalized Shape Metrics on Neural Representations [26.78835065137714]
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
arXiv Detail & Related papers (2021-10-27T19:48:55Z)
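One concrete member of such a family is the orthogonal Procrustes distance, a proper metric on centered representations: d(X, Y) = min over orthogonal Q of ||X - YQ||_F, with the minimizer obtained from an SVD. The sketch below uses synthetic data and is illustrative only; the paper develops a broader family, including CCA-based measures adjusted to satisfy the triangle inequality.

```python
# Illustrative shape-metric computation on synthetic data.
import numpy as np

def procrustes_distance(X, Y):
    """min_Q ||X - Y Q||_F over orthogonal Q, after centering columns."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Y.T @ X)
    Q = U @ Vt  # optimal orthogonal alignment of Y onto X
    return np.linalg.norm(X - Y @ Q)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                  # 100 stimuli x 10 units
R, _ = np.linalg.qr(rng.normal(size=(10, 10)))  # random orthogonal map
print(procrustes_distance(X, X @ R))            # ~0: rotations are free
print(procrustes_distance(X, rng.normal(size=(100, 10))))  # much larger
```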
- Desiderata for Representation Learning: A Causal Perspective [104.3711759578494]
We take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) and disentanglement (in unsupervised representation learning).
This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest and learn non-spurious and disentangled representations from single observational datasets.
arXiv Detail & Related papers (2021-09-08T17:33:54Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for updating credal nets.
This contribution can be regarded as a systematic approach to representing structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)