Symbolic Snapshot Ensembles
- URL: http://arxiv.org/abs/2510.24633v1
- Date: Tue, 28 Oct 2025 17:01:38 GMT
- Title: Symbolic Snapshot Ensembles
- Authors: Mingyue Liu, Andrew Cropper
- Abstract summary: In this paper, we train an ILP algorithm only once and save intermediate hypotheses. Our experiments on multiple benchmarks, including game playing and visual reasoning, show that our approach improves predictive accuracy by 4% with less than 1% computational overhead.
- Score: 17.98221518812985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inductive logic programming (ILP) is a form of logical machine learning. Most ILP algorithms learn a single hypothesis from a single training run. Ensemble methods train an ILP algorithm multiple times to learn multiple hypotheses. In this paper, we train an ILP algorithm only once and save intermediate hypotheses. We then combine the hypotheses using a minimum description length weighting scheme. Our experiments on multiple benchmarks, including game playing and visual reasoning, show that our approach improves predictive accuracy by 4% with less than 1% computational overhead.
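The abstract's core idea, saving intermediate hypotheses from one training run and combining them with a minimum description length (MDL) weighting, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the hypothesis representation, the `2^-L` weighting form, and the weighted-vote combination rule are assumptions made for the example.

```python
import math

def mdl_weight(description_length: int) -> float:
    # MDL-style prior: shorter hypotheses receive exponentially more weight
    return 2.0 ** (-description_length)

def ensemble_predict(snapshots, example) -> bool:
    # snapshots: (hypothesis_fn, description_length) pairs saved during a
    # single training run; hypothesis_fn maps an example to True/False
    votes = {True: 0.0, False: 0.0}
    for hypothesis, length in snapshots:
        votes[hypothesis(example)] += mdl_weight(length)
    return votes[True] >= votes[False]

# toy snapshots: progressively refined rules of growing description length
snapshots = [
    (lambda x: x > 0, 2),        # early, small hypothesis
    (lambda x: x > 5, 3),        # intermediate hypothesis
    (lambda x: 0 < x < 100, 5),  # final, larger hypothesis
]
print(ensemble_predict(snapshots, 3))  # weighted vote over all snapshots
```

Because the weights decay with description length, the ensemble defaults toward the simpler early hypotheses unless several larger ones agree against them, which is the MDL bias the abstract describes.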
Related papers
- Symmetry breaking for inductive logic programming [23.251472351777934]
We introduce a method to break symmetries in the hypothesis space. Our experiments on multiple domains, including visual reasoning and game playing, show that our approach can reduce solving times from over an hour to just 17 seconds.
arXiv Detail & Related papers (2025-08-08T12:28:42Z)
- Agentic-R1: Distilled Dual-Strategy Reasoning [58.73951532294446]
Current long chain-of-thought (long-CoT) models excel at mathematical reasoning but rely on slow and error-prone natural language traces. We introduce a fine-tuning framework, DualDistill, that distills complementary reasoning strategies from multiple teachers into a unified student model. Our method improves accuracy across a range of tasks, including both computation-intensive and standard benchmarks.
arXiv Detail & Related papers (2025-07-08T06:35:16Z)
- Honey, I shrunk the hypothesis space (through logical preprocessing) [19.54008511592332]
We introduce an approach that 'shrinks' the hypothesis space before an ILP system searches it. Our approach uses background knowledge to find rules that cannot be in an optimal hypothesis regardless of the training examples. Our experiments show that our approach can substantially reduce learning times whilst maintaining predictive accuracy.
arXiv Detail & Related papers (2025-06-07T09:53:02Z)
- Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs [76.43407125275202]
o1-like models can emulate human-like long-time thinking during inference. This paper presents the first comprehensive study of the prevalent issue of overthinking in these models. We propose strategies to mitigate overthinking, streamlining reasoning processes without compromising accuracy.
arXiv Detail & Related papers (2024-12-30T18:55:12Z)
- Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models [70.07661254213181]
We propose two algorithms that enjoy provable scaling laws for the test-time compute of large language models. One is a two-stage knockout-style algorithm; the other is a two-stage league-style algorithm, in which each candidate is evaluated by its average win rate against multiple opponents.
arXiv Detail & Related papers (2024-11-29T05:29:47Z)
- Learning logic programs by finding minimal unsatisfiable subprograms [24.31242130341093]
We introduce an ILP approach that identifies minimal unsatisfiable subprograms (MUSPs). Our experiments on multiple domains, including program synthesis and game playing, show that our approach can reduce learning times by 99%.
arXiv Detail & Related papers (2024-01-29T18:24:16Z)
- AdaStop: adaptive statistical testing for sound comparisons of Deep RL agents [17.481638913280403]
We propose a theoretically sound methodology for comparing the performance of a set of algorithms. AdaStop is a new statistical test based on multiple group sequential tests.
arXiv Detail & Related papers (2023-06-19T12:22:56Z)
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z)
- Learning logic programs by discovering where not to search [18.27510863075184]
We introduce an approach that, before searching for a hypothesis, first discovers 'where not to search'. We use the given background knowledge (BK) to discover constraints on hypotheses, such as that a number cannot be both even and odd.
Our experiments on multiple domains show that our approach can substantially reduce learning times.
arXiv Detail & Related papers (2022-02-20T12:32:03Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- The Kikuchi Hierarchy and Tensor PCA [50.840260149979265]
For the tensor PCA (principal component analysis) problem, we propose a new hierarchy of increasingly powerful algorithms with increasing runtime. Our hierarchy is analogous to the sum-of-squares (SOS) hierarchy but is instead inspired by statistical physics and related algorithms.
arXiv Detail & Related papers (2019-04-08T06:26:35Z)
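To ground the tensor PCA setup the last entry refers to, here is a minimal sketch of the planted-spike model solved by plain tensor power iteration, a simpler baseline rather than the Kikuchi hierarchy itself; the tensor size, spike strength, and warm start are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 30, 50.0

# planted spike: T = lam * v (outer) v (outer) v + i.i.d. Gaussian noise
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
T = lam * np.einsum("i,j,k->ijk", v, v, v) + rng.standard_normal((n, n, n))

# warm start near the spike keeps this toy deterministic; in practice
# one would use many random restarts (or a stronger spectral method)
w = rng.standard_normal(n)
u = v + 0.3 * w / np.linalg.norm(w)
u /= np.linalg.norm(u)

# tensor power iteration: u <- normalize(T(., u, u))
for _ in range(50):
    u = np.einsum("ijk,j,k->i", T, u, u)
    u /= np.linalg.norm(u)

print(abs(u @ v))  # close to 1 when the spike is recovered
```

With the spike strength well above the noise level, the iteration locks onto the planted direction; the hierarchy in the paper targets the harder regime where such simple iterations fail.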
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.