Comment on Is Complexity an Illusion?
- URL: http://arxiv.org/abs/2411.08897v1
- Date: Tue, 29 Oct 2024 02:40:05 GMT
- Title: Comment on Is Complexity an Illusion?
- Authors: Gabriel Simmons
- Abstract summary: "Is Complexity an Illusion?" (Bennett, 2024) provides a formalism for complexity, learning, inference, and generalization.
This reply shows that correct policies do not exist for a simple task of supervised multi-class classification.
- Score: 5.439020425819001
- Abstract: The paper "Is Complexity an Illusion?" (Bennett, 2024) provides a formalism for complexity, learning, inference, and generalization, and introduces a formal definition for a "policy". This reply shows that correct policies do not exist for a simple task of supervised multi-class classification, via mathematical proof and exhaustive search. Implications of this result are discussed, as well as possible responses and amendments to the theory.
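The reply's non-existence result rests on Bennett (2024)'s formal definitions of tasks and policies, so it cannot be reproduced without them; the minimal Python sketch below (the encoding, names, and correctness test are simplified stand-ins, not the paper's definitions) only illustrates the shape of an exhaustive search over a finite policy space.

```python
from itertools import chain, combinations

# Toy multi-class classification task: each input must be mapped to
# exactly one of three labels. (Hypothetical encoding; Bennett (2024)
# defines tasks and policies over declarative programs, not these sets.)
INPUTS = ["x1", "x2"]
LABELS = ["a", "b", "c"]
CORRECT = {"x1": "a", "x2": "b"}  # ground-truth labelling

# Candidate "policies" here are arbitrary sets of (input, label) facts.
FACTS = [(x, y) for x in INPUTS for y in LABELS]

def powerset(items):
    """All subsets of items, i.e. the full finite policy space."""
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )

def is_correct(policy):
    # Simplified stand-in for the paper's correctness condition: the
    # policy must assign each input exactly its ground-truth label.
    for x in INPUTS:
        outputs = {y for (xi, y) in policy if xi == x}
        if outputs != {CORRECT[x]}:
            return False
    return True

correct = [p for p in powerset(FACTS) if is_correct(p)]
print(f"{len(correct)} correct policies out of {2 ** len(FACTS)} candidates")
```

Note that under this naive correctness test a correct policy does exist; the point of the reply is that Bennett's formal definition of a policy rules them out for the multi-class task, which the exhaustive search then confirms.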
Related papers
- Simplifying Adversarially Robust PAC Learning with Tolerance [9.973499768462888]
We show the existence of a simpler learner that achieves a sample complexity linear in the VC-dimension without requiring additional assumptions on H.
Even though our learner is improper, it is "almost proper" in the sense that it outputs a hypothesis that is "similar" to a hypothesis in H.
We also use the ideas from our algorithm to construct a semi-supervised learner in the tolerant setting.
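For context, "sample complexity linear in the VC-dimension" matches the shape of the textbook agnostic PAC bound; the expression below is that standard bound, stated for orientation only, not the paper's tolerant robust-learning result:

```latex
% Textbook agnostic PAC sample complexity (illustrative only; per the
% abstract, the paper's learner shares the linear dependence on VC(H)):
m(\epsilon, \delta) = O\!\left( \frac{\mathrm{VC}(H) + \log(1/\delta)}{\epsilon^{2}} \right)
```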
arXiv Detail & Related papers (2025-02-11T03:48:40Z)
- A Theory of Formalisms for Representing Knowledge [6.577225204907418]
There has been a longstanding dispute over which formalism is the best for representing knowledge in AI.
We propose a general framework to capture various knowledge representation formalisms in which we are interested.
arXiv Detail & Related papers (2024-12-16T15:13:30Z)
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Is Complexity an Illusion? [0.0]
We show that all constraints can take equally simple forms, regardless of weakness.
If a function is represented using a finite subset of forms, then we can force a correlation between simplicity and generalisation.
arXiv Detail & Related papers (2024-03-31T13:36:55Z)
- A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments [59.87080148922358]
We argue that the effects Makelov et al. (2023) observe in practice are artifacts of their training and evaluation paradigms.
Though we disagree with their core characterization, Makelov et al. (2023)'s examples and discussion have undoubtedly pushed the field of interpretability forward.
arXiv Detail & Related papers (2024-01-23T10:27:42Z)
- Discovering modular solutions that generalize compositionally [55.46688816816882]
We show that identification up to linear transformation purely from demonstrations is possible without having to learn an exponential number of module combinations.
We further demonstrate empirically that meta-learning from finite data can discover modular policies that generalize compositionally in a number of complex environments.
arXiv Detail & Related papers (2023-12-22T16:33:50Z)
- Succinct Representations for Concepts [12.134564449202708]
Foundation models like ChatGPT have demonstrated remarkable performance on various tasks.
However, for many questions, they may produce false answers that look accurate.
In this paper, we introduce succinct representations of concepts based on category theory.
arXiv Detail & Related papers (2023-03-01T12:11:23Z)
- On the Complexity of Representation Learning in Contextual Linear Bandits [110.84649234726442]
We show that representation learning is fundamentally more complex than linear bandits.
In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set.
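Read schematically, the second claim is a lower bound of the following shape (the notation is an assumption for illustration, not the paper's):

```latex
% Schematic rendering of the claim above (assumed notation, not the
% paper's): regret when learning with the representation set \Phi is
% never lower than regret with the worst realizable representation in it.
\mathrm{Regret}_T(\Phi) \;\gtrsim\; \max_{\substack{\phi \in \Phi \\ \phi\ \mathrm{realizable}}} \mathrm{Regret}_T(\phi)
```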
arXiv Detail & Related papers (2022-12-19T13:08:58Z)
- On the Complexity of Bayesian Generalization [141.21610899086392]
We consider concept generalization at a large scale in the diverse and natural visual spectrum.
We study two modes of generalization as the problem space scales up and the complexity of concepts becomes diverse.
arXiv Detail & Related papers (2022-11-20T17:21:37Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
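To make the decomposition format concrete, here is a hypothetical example in Python; the claim and subquestions are invented, and ClaimDecomp's actual schema may differ:

```python
# Invented example of a claim decomposition (not drawn from ClaimDecomp).
claim = "The new policy cut unemployment in half within a year."
subquestions = [
    "Did unemployment fall after the policy took effect?",                 # literal
    "Did unemployment fall by roughly half?",                              # literal
    "Was the fall attributable to the policy rather than other factors?",  # implied
]
# Each subquestion is answerable yes/no; the answers jointly bear on
# the veracity of the full claim.
```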
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.