Automated Strategy Invention for Confluence of Term Rewrite Systems
 - URL: http://arxiv.org/abs/2411.06409v1
 - Date: Sun, 10 Nov 2024 10:08:43 GMT
 - Title: Automated Strategy Invention for Confluence of Term Rewrite Systems
 - Authors: Liao Zhang, Fabian Mitterwallner, Jan Jakubuv, Cezary Kaliszyk
 - Abstract summary: We apply machine learning to develop the first learning-guided automatic confluence prover.
Our results focus on improving the state-of-the-art automatic confluence prover CSI: When equipped with our invented strategies, it surpasses its human-designed strategies both on the augmented dataset and on the original human-created benchmark dataset Cops.
 - Score: 3.662364375995991
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract:   Term rewriting plays a crucial role in software verification and compiler optimization. With dozens of highly parameterizable techniques developed to prove various system properties, automatic term rewriting tools work in an extensive parameter space. This complexity exceeds human capacity for parameter selection, motivating an investigation into automated strategy invention. In this paper, we focus on confluence, an important property of term rewrite systems, and apply machine learning to develop the first learning-guided automatic confluence prover. Moreover, we randomly generate a large dataset to analyze confluence for term rewrite systems. Our results focus on improving the state-of-the-art automatic confluence prover CSI: When equipped with our invented strategies, it surpasses its human-designed strategies both on the augmented dataset and on the original human-created benchmark dataset Cops, proving/disproving the confluence of several term rewrite systems for which no automated proofs were known before. 
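To make the abstract's central notion concrete: a rewrite system is confluent when any two diverging rewrites of the same object can be rejoined at a common result. The sketch below is not CSI or the paper's method; it is a minimal Python illustration on a hypothetical string rewrite system, using the standard fact that a terminating system is confluent exactly when every object has a unique normal form, checked exhaustively on a small bounded fragment.

```python
from itertools import product

# Hypothetical toy string rewrite system, chosen only to illustrate confluence.
RULES = [("aa", "a"), ("ab", "b")]

def one_step(s):
    """All strings reachable from s by a single rule application."""
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def normal_forms(s):
    """All normal forms reachable from s; the search terminates because
    every rule strictly shortens the string."""
    succ = one_step(s)
    if not succ:
        return {s}
    return set().union(*(normal_forms(t) for t in succ))

def confluent_up_to(length, alphabet="ab"):
    """For a terminating system, confluence on this finite fragment is
    equivalent to every string having exactly one normal form."""
    for n in range(length + 1):
        for letters in product(alphabet, repeat=n):
            s = "".join(letters)
            if len(normal_forms(s)) != 1:
                return False, s          # witness string with two normal forms
    return True, None

if __name__ == "__main__":
    print(confluent_up_to(7))            # expected: (True, None)
```

A real confluence prover such as CSI works on first-order terms rather than strings and combines many parameterized criteria (for instance, joinability of critical pairs together with termination, via Newman's lemma); choosing and ordering those criteria is the strategy space that the paper's learned strategies explore.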
 
       
      
        Related papers
- Agent0: Leveraging LLM Agents to Discover Multi-value Features from Text for Enhanced Recommendations [0.0]
Large language models (LLMs) and their associated agent-based frameworks have significantly advanced automated information extraction. This paper presents Agent0, an agent-based system designed to automate information extraction and feature construction from raw, unstructured text.
arXiv  Detail & Related papers  (2025-07-25T06:45:10Z)
- A Systematic Review of Key Retrieval-Augmented Generation (RAG) Systems: Progress, Gaps, and Future Directions [1.4931265249949528]
Retrieval-Augmented Generation (RAG) is a major advancement in natural language processing (NLP). RAG combines large language models (LLMs) with information retrieval systems to enhance factual grounding, accuracy, and contextual relevance. This paper presents a systematic review of RAG, tracing its evolution from early developments in open domain question answering to recent state-of-the-art implementations.
arXiv  Detail & Related papers  (2025-07-25T03:05:46Z)
- On Automating Security Policies with Contemporary LLMs [3.47402794691087]
In this paper, we present a framework for automating attack mitigation policy compliance through an innovative combination of in-context learning and retrieval-augmented generation (RAG). Our empirical evaluation, conducted using publicly available CTI policies in STIXv2 format and Windows API documentation, demonstrates significant improvements in precision, recall, and F1-score when employing RAG compared to a non-RAG baseline.
arXiv  Detail & Related papers  (2025-06-05T09:58:00Z)
- Advanced Chain-of-Thought Reasoning for Parameter Extraction from Documents Using Large Language Models [3.7324910012003656]
Current methods struggle to handle high-dimensional design data and meet the demands of real-time processing.
We propose an innovative framework that automates the extraction of parameters and the generation of PySpice models.
 Experimental results show that applying all three methods together improves retrieval precision by 47.69% and reduces processing latency by 37.84%.
arXiv  Detail & Related papers  (2025-02-23T11:19:44Z)
- A Proposed Large Language Model-Based Smart Search for Archive System [0.0]
This study presents a novel framework for smart search in digital archival systems.
By employing a Retrieval-Augmented Generation (RAG) approach, the framework enables the processing of natural language queries.
We present the architecture and implementation of the system and evaluate its performance in four experiments.
arXiv  Detail & Related papers  (2025-01-13T02:53:07Z)
- Agentic AI-Driven Technical Troubleshooting for Enterprise Systems: A Novel Weighted Retrieval-Augmented Generation Paradigm [0.0]
This paper presents a novel agentic AI solution built on a Weighted Retrieval-Augmented Generation (RAG) Framework tailored for enterprise technical troubleshooting.
By dynamically weighting retrieval sources such as product manuals, internal knowledge bases, FAQ, and troubleshooting guides, the framework prioritizes the most relevant data.
Preliminary evaluations on large enterprise datasets demonstrate the framework's efficacy in improving troubleshooting accuracy, reducing resolution times, and adapting to varied technical challenges.
arXiv  Detail & Related papers  (2024-12-16T17:32:38Z)
- Boosting CNN-based Handwriting Recognition Systems with Learnable Relaxation Labeling [48.78361527873024]
We propose a novel approach to handwriting recognition that integrates the strengths of two distinct methodologies.
We introduce a sparsification technique that accelerates the convergence of the algorithm and enhances the overall system's performance.
arXiv  Detail & Related papers  (2024-09-09T15:12:28Z)
- Advancing Cyber Incident Timeline Analysis Through Rule Based AI and Large Language Models [0.0]
This paper introduces a novel framework, GenDFIR, which combines Rule-Based Artificial Intelligence (R-BAI) algorithms with Large Language Models (LLMs) to enhance and automate the Timeline Analysis process.
arXiv  Detail & Related papers  (2024-09-04T09:46:33Z)
- Inference Optimization of Foundation Models on AI Accelerators [68.24450520773688]
Powerful foundation models, including large language models (LLMs), with Transformer architectures have ushered in a new era of Generative AI.
As the number of model parameters reaches hundreds of billions, their deployment incurs prohibitive inference costs and high latency in real-world scenarios.
This tutorial offers a comprehensive discussion on complementary inference optimization techniques using AI accelerators.
arXiv  Detail & Related papers  (2024-07-12T09:24:34Z)
- "I understand why I got this grade": Automatic Short Answer Grading with Feedback [36.74896284581596]
We present a dataset of 5.8k student answers accompanied by reference answers and questions for the Automatic Short Answer Grading (ASAG) task.
The EngSAF dataset is meticulously curated to cover a diverse range of subjects, questions, and answer patterns from multiple engineering domains.
arXiv  Detail & Related papers  (2024-06-30T15:42:18Z)
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AMS mappings.
arXiv  Detail & Related papers  (2024-06-22T11:17:50Z)
- Thread Detection and Response Generation using Transformers with Prompt Optimisation [5.335657953493376]
This paper develops an end-to-end model that identifies threads and prioritises their response generation based on their importance.
The model achieves up to a 10x speed improvement while generating more coherent results than existing models.
arXiv  Detail & Related papers  (2024-03-09T14:50:20Z)
- Toward Educator-focused Automated Scoring Systems for Reading and Writing [0.0]
This paper addresses the challenges of data and label availability, authentic and extended writing, domain scoring, prompt and source variety, and transfer learning.
It employs techniques that preserve essay length as an important feature without increasing model training costs.
arXiv  Detail & Related papers  (2021-12-22T15:44:30Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach can enable principled reasoning about privacy loss in the setting of data processing.
arXiv  Detail & Related papers  (2021-07-09T07:19:23Z)
- On Learning Text Style Transfer with Direct Rewards [101.97136885111037]
Lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task.
We leverage semantic similarity metrics originally used for fine-tuning neural machine translation models.
Our model provides significant gains in both automatic and human evaluation over strong baselines.
arXiv  Detail & Related papers  (2020-10-24T04:30:02Z)
- Recent Developments Combining Ensemble Smoother and Deep Generative Networks for Facies History Matching [58.720142291102135]
This research project focuses on the use of autoencoders networks to construct a continuous parameterization for facies models.
We benchmark seven different formulations, including VAE, generative adversarial network (GAN), Wasserstein GAN, variational auto-encoding GAN, principal component analysis (PCA) with cycle GAN, PCA with transfer style network and VAE with style loss.
arXiv  Detail & Related papers  (2020-05-08T21:32:42Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     