Forest Mixing: investigating the impact of multiple search trees and a
shared refinements pool on ontology learning
- URL: http://arxiv.org/abs/2309.17252v1
- Date: Fri, 29 Sep 2023 14:02:34 GMT
- Title: Forest Mixing: investigating the impact of multiple search trees and a
shared refinements pool on ontology learning
- Authors: Marco Pop-Mihali and Adrian Groza
- Abstract summary: We extend the Class Expression Learning for Ontology Engineering (CELOE) algorithm contained in the DL-Learner tool.
The aim is to foster a diverse set of starting classes and to streamline the process of finding class expressions in large search spaces.
- Score: 1.3597551064547502
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We aim at developing white-box machine learning algorithms. We focus here on
algorithms for learning axioms in description logic. We extend the Class
Expression Learning for Ontology Engineering (CELOE) algorithm contained in the
DL-Learner tool. The approach uses multiple search trees and a shared pool of
refinements in order to split the search space into smaller subspaces. We
introduce a conjunction operation over the best class expressions from each tree,
keeping the results that give the most information. The aim is to foster
exploration from a diverse set of starting classes and to streamline the
process of finding class expressions in ontologies. With the current
implementation and settings, the Forest Mixing approach did not outperform the
traditional CELOE. Despite these results, the conceptual proposal brought
forward by this approach may stimulate future improvements in class expression
finding in ontologies.
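The abstract's core idea — several search trees refining class expressions in parallel, deduplicating candidates through a shared pool, and conjoining the best expression from each tree — can be sketched in miniature. This is a hypothetical toy, not the DL-Learner implementation: class expressions are modelled as plain conjunctions of atomic concepts (frozensets of strings), individuals as sets of the concepts they satisfy, and the refinement operator simply specialises an expression by adding one atom, whereas CELOE uses a full description-logic refinement operator and heuristic. All names (`forest_mixing`, `refine`, `score`, etc.) are invented for illustration.

```python
def covers(expr, individual):
    """A conjunction covers an individual iff the individual has every atom."""
    return expr <= individual

def score(expr, positives, negatives):
    """Simple accuracy over labelled individuals (stand-in for CELOE's heuristic)."""
    tp = sum(covers(expr, p) for p in positives)
    tn = sum(not covers(expr, n) for n in negatives)
    return (tp + tn) / (len(positives) + len(negatives))

def refine(expr, vocabulary):
    """Downward refinement: specialise by conjoining one new atomic concept."""
    for atom in vocabulary:
        if atom not in expr:
            yield expr | frozenset([atom])

def forest_mixing(starts, vocabulary, positives, negatives, steps=2):
    shared_pool = set()                       # refinements already seen by ANY tree
    trees = [{frozenset(s)} for s in starts]  # one search tree per starting class
    for _ in range(steps):
        for tree in trees:
            new = set()
            for expr in tree:
                for r in refine(expr, vocabulary):
                    if r not in shared_pool:  # skip subspace another tree explored
                        shared_pool.add(r)
                        new.add(r)
            tree |= new
    # Take the best expression from each tree, then also try their conjunction,
    # keeping whichever candidate is most informative.
    bests = [max(t, key=lambda e: score(e, positives, negatives)) for t in trees]
    mixed = frozenset().union(*bests)
    return max(bests + [mixed], key=lambda e: score(e, positives, negatives))

if __name__ == "__main__":
    # Toy ontology: learn a class separating parents who are male.
    positives = [{"Person", "Parent", "Male"}, {"Person", "Parent", "Male", "Tall"}]
    negatives = [{"Person", "Male"}, {"Person", "Parent"}]
    best = forest_mixing([{"Parent"}, {"Male"}],
                         ["Person", "Parent", "Male", "Tall"],
                         positives, negatives)
    print(sorted(best), score(best, positives, negatives))
```

The shared pool is what splits the search space: once one tree has generated a refinement, the other trees skip it, so each tree ends up specialising a different subspace, and the final conjunction recombines their best findings.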
Related papers
- Technical Report: Enhancing LLM Reasoning with Reward-guided Tree Search [95.06503095273395] (2024-11-18)
  Implementing an o1-like reasoning approach is challenging, and researchers have been making various attempts to advance this open area of research.
  We present a preliminary exploration into enhancing the reasoning abilities of LLMs through reward-guided tree search algorithms.
- Visual Prompt Selection for In-Context Learning Segmentation [77.15684360470152] (2024-07-14)
  In this paper, we focus on rethinking and improving the example selection strategy.
  We first demonstrate that ICL-based segmentation models are sensitive to different contexts.
  Furthermore, empirical evidence indicates that the diversity of contextual prompts plays a crucial role in guiding segmentation.
- LiteSearch: Efficacious Tree Search for LLM [70.29796112457662] (2024-06-29)
  This study introduces a novel guided tree search algorithm with dynamic node selection and a node-level exploration budget.
  Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach enjoys significantly lower computational costs compared to baseline methods.
- From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models [63.188607839223046] (2024-06-24)
  This survey focuses on the benefits of scaling compute during inference.
  We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation.
- AutoKG: Efficient Automated Knowledge Graph Generation for Language Models [9.665916299598338] (2023-11-22)
  AutoKG is a lightweight and efficient approach for automated knowledge graph construction.
  Preliminary experiments demonstrate that AutoKG offers a more comprehensive and interconnected knowledge retrieval mechanism.
- Relation-aware Ensemble Learning for Knowledge Graph Embedding [68.94900786314666] (2023-10-13)
  We propose to learn an ensemble by leveraging existing methods in a relation-aware manner.
  Exploring these semantics with a relation-aware ensemble leads to a much larger search space than general ensemble methods.
  We propose a divide-search-combine algorithm, RelEns-DSC, that searches the relation-wise ensemble weights independently.
- Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models [17.059322033670124] (2023-08-20)
  We propose a novel strategy that propels Large Language Models through algorithmic reasoning pathways.
  Our results suggest that instructing an LLM using an algorithm can lead to performance surpassing that of the algorithm itself.
- CrossBeam: Learning to Search in Bottom-Up Program Synthesis [51.37514793318815] (2022-03-20)
  We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
  Our approach, called CrossBeam, uses the neural model to choose how to combine previously explored programs into new programs.
  We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state of the art.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.