No Substitute for Functionalism -- A Reply to 'Falsification &
Consciousness'
- URL: http://arxiv.org/abs/2006.13664v3
- Date: Fri, 30 Apr 2021 21:54:25 GMT
- Title: No Substitute for Functionalism -- A Reply to 'Falsification &
Consciousness'
- Authors: Natesh Ganesh
- Abstract summary: This reply identifies avenues of expansion for the model proposed in [1], allowing us to distinguish between different types of variation.
Motivated by examples from neural networks, state machines and Turing machines, we will prove that substitutions do not exist for a very broad class of Level-1 functionalist theories.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In their paper 'Falsification and Consciousness' [1], Kleiner and Hoel
introduced a formal mathematical model of the process of generating observable
data from experiments and using that data to generate inferences and
predictions onto an experience space. The resulting substitution argument built
on this framework was used to show that any theory of consciousness with
independent inference and prediction data is pre-falsified, if the inference
reports are considered valid. If this argument does indeed pre-falsify many of
the leading theories of consciousness, it would indicate a fundamental problem
affecting the field of consciousness as a whole that would require radical
changes to how consciousness science is performed. In this reply, the author
will identify avenues of expansion for the model proposed in [1], allowing us
to distinguish between different types of variation. Motivated by examples from
neural networks, state machines and Turing machines, we will prove that
substitutions do not exist for a very broad class of Level-1 functionalist
theories, rendering them immune to the aforementioned substitution argument.
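A minimal sketch (invented for illustration, not from the paper) of the intuition behind the substitution argument: two finite-state machines with different internal state structure can be input-output equivalent, so any experiment that only observes reports cannot distinguish a theory's claims about their internal states.

```python
from itertools import product

def run(transitions, outputs, start, inputs):
    """Drive a Mealy machine: transitions[(state, sym)] gives the next
    state, outputs[(state, sym)] gives the emitted symbol."""
    state, out = start, []
    for sym in inputs:
        out.append(outputs[(state, sym)])
        state = transitions[(state, sym)]
    return out

# Machine A: a single state that echoes its input.
A_t = {("s", 0): "s", ("s", 1): "s"}
A_o = {("s", 0): 0, ("s", 1): 1}

# Machine B: shuttles between two internal states, yet still echoes input.
B_t = {("p", 0): "q", ("p", 1): "q", ("q", 0): "p", ("q", 1): "p"}
B_o = {("p", 0): 0, ("p", 1): 1, ("q", 0): 0, ("q", 1): 1}

# A and B agree on every input word (checked here up to length 5),
# despite having different internal state structure.
for n in range(1, 6):
    for word in product([0, 1], repeat=n):
        assert run(A_t, A_o, "s", word) == run(B_t, B_o, "p", word)
print("A and B are input-output equivalent on all words up to length 5")
```

The reply's Level-1 functionalist theories, roughly, are those whose claims survive such substitutions of one internal realization for another with the same input-output behavior.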
Related papers
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential for building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Class-wise Activation Unravelling the Engima of Deep Double Descent [0.0]
Double descent is a counter-intuitive phenomenon in machine learning.
In this study, we revisited the phenomenon of double descent and discussed the conditions of its occurrence.
arXiv Detail & Related papers (2024-05-13T12:07:48Z)
- Isopignistic Canonical Decomposition via Belief Evolution Network [12.459136964317942]
We propose an isopignistic transformation based on the belief evolution network.
This decomposition offers a reverse path between the possibility distribution and its isopignistic mass functions.
This paper establishes a theoretical basis for building general models of artificial intelligence based on probability theory, Dempster-Shafer theory, and possibility theory.
arXiv Detail & Related papers (2024-05-04T12:39:15Z)
- How does the primate brain combine generative and discriminative computations in vision? [4.691670689443386]
Two contrasting conceptions of the inference process have each been influential in research on biological vision and machine vision.
According to one conception, vision inverts a generative model through an interrogation of the evidence, in a process often thought to involve top-down predictions of sensory data.
We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
arXiv Detail & Related papers (2024-01-11T16:07:58Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
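A toy sketch of that consistency check (the miniature "model" and examples are invented here, not taken from the paper): build a counterfactual hypothesis by violating a predicate from the explanation, then verify the model's label changes the way the logic predicts.

```python
def toy_nli(premise, hypothesis):
    # Stand-in NLI model: entailment iff every hypothesis token
    # also appears in the premise, otherwise neutral.
    p = set(premise.lower().split())
    return "entailment" if set(hypothesis.lower().split()) <= p else "neutral"

premise = "a dog is an animal"
hypothesis = "dog is an animal"
assert toy_nli(premise, hypothesis) == "entailment"

# Explanation predicate is_a(dog, animal); the counterfactual replaces
# the predicate's object with one the explanation does not license.
counterfactual = "dog is a vehicle"
expected = "neutral"  # the logic predicts the entailment should break
consistent = toy_nli(premise, counterfactual) == expected
print("faithful explanation:", consistent)
```

If the model kept predicting entailment on the counterfactual, the explanation would be judged unfaithful to the model's actual reasoning.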
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Abduction and Argumentation for Explainable Machine Learning: A Position Survey [2.28438857884398]
This paper presents Abduction and Argumentation as two principled forms for reasoning.
It fleshes out the fundamental role that they can play within Machine Learning.
arXiv Detail & Related papers (2020-10-24T13:23:44Z)
- Formalizing Falsification for Theories of Consciousness Across Computational Hierarchies [0.0]
Integrated Information Theory (IIT) is widely regarded as the preeminent theory of consciousness.
Epistemological issues in the form of the "unfolding argument" have provided a refutation of IIT.
We show how IIT is simultaneously falsified at the finite-state automaton level and unfalsifiable at the combinatorial state automaton level.
arXiv Detail & Related papers (2020-06-12T18:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.